Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- -9A0T4oBgHgl3EQfPP_v/vector_store/index.faiss +3 -0
- -NE5T4oBgHgl3EQfRg5P/content/2301.05521v1.pdf +3 -0
- .gitattributes +34 -0
- 0NAyT4oBgHgl3EQf0_na/content/tmp_files/2301.00729v1.pdf.txt +1646 -0
- 0NAyT4oBgHgl3EQf0_na/content/tmp_files/load_file.txt +0 -0
- 19E3T4oBgHgl3EQfngq-/content/tmp_files/2301.04626v1.pdf.txt +1251 -0
- 19E3T4oBgHgl3EQfngq-/content/tmp_files/load_file.txt +0 -0
- 1dAzT4oBgHgl3EQfDfo-/vector_store/index.faiss +3 -0
- 29FST4oBgHgl3EQfYTgs/vector_store/index.faiss +3 -0
- 2NE4T4oBgHgl3EQfagwY/content/2301.05064v1.pdf +3 -0
- 2NE4T4oBgHgl3EQfagwY/vector_store/index.faiss +3 -0
- 2NE4T4oBgHgl3EQfagwY/vector_store/index.pkl +3 -0
- 4tE4T4oBgHgl3EQf1A0X/vector_store/index.faiss +3 -0
- 5NAyT4oBgHgl3EQfpPjF/content/tmp_files/2301.00523v1.pdf.txt +1189 -0
- 5NAyT4oBgHgl3EQfpPjF/content/tmp_files/load_file.txt +0 -0
- 6tE4T4oBgHgl3EQfCAsz/content/tmp_files/2301.04856v1.pdf.txt +0 -0
- 6tE4T4oBgHgl3EQfCAsz/content/tmp_files/load_file.txt +0 -0
- 79A0T4oBgHgl3EQfOf-b/vector_store/index.faiss +3 -0
- 7tE1T4oBgHgl3EQfBwI8/vector_store/index.pkl +3 -0
- 7tE2T4oBgHgl3EQfPgY3/content/tmp_files/2301.03759v1.pdf.txt +1407 -0
- 7tE2T4oBgHgl3EQfPgY3/content/tmp_files/load_file.txt +0 -0
- 8dAzT4oBgHgl3EQfgfzo/vector_store/index.pkl +3 -0
- 8tFRT4oBgHgl3EQfpzcC/content/2301.13614v1.pdf +3 -0
- 8tFRT4oBgHgl3EQfpzcC/vector_store/index.faiss +3 -0
- 9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt +0 -0
- 9dA0T4oBgHgl3EQfO_9E/content/tmp_files/load_file.txt +0 -0
- AdFQT4oBgHgl3EQfMTYr/content/tmp_files/2301.13267v1.pdf.txt +1613 -0
- AdFQT4oBgHgl3EQfMTYr/content/tmp_files/load_file.txt +0 -0
- CdFAT4oBgHgl3EQftB4M/vector_store/index.pkl +3 -0
- CtE1T4oBgHgl3EQf9wbT/vector_store/index.pkl +3 -0
- GNE1T4oBgHgl3EQf_AZD/content/tmp_files/2301.03575v1.pdf.txt +3147 -0
- GNE1T4oBgHgl3EQf_AZD/content/tmp_files/load_file.txt +0 -0
- HdAzT4oBgHgl3EQfUvz7/content/tmp_files/2301.01274v1.pdf.txt +801 -0
- HdAzT4oBgHgl3EQfUvz7/content/tmp_files/load_file.txt +387 -0
- JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf +3 -0
- JdE2T4oBgHgl3EQfUgew/vector_store/index.faiss +3 -0
- KdFOT4oBgHgl3EQfzDRI/content/2301.12930v1.pdf +3 -0
- MNE4T4oBgHgl3EQfiw29/content/tmp_files/2301.05137v1.pdf.txt +2136 -0
- MNE4T4oBgHgl3EQfiw29/content/tmp_files/load_file.txt +0 -0
- MdE1T4oBgHgl3EQftQUl/content/2301.03374v1.pdf +3 -0
- OtAyT4oBgHgl3EQfUfen/vector_store/index.faiss +3 -0
- OtAyT4oBgHgl3EQfUfen/vector_store/index.pkl +3 -0
- PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf +3 -0
- PdE0T4oBgHgl3EQfkAEs/vector_store/index.pkl +3 -0
- PtAyT4oBgHgl3EQf7fqt/vector_store/index.pkl +3 -0
- QNFJT4oBgHgl3EQf2i1I/vector_store/index.pkl +3 -0
- QNFRT4oBgHgl3EQfJje8/content/tmp_files/2301.13496v1.pdf.txt +1447 -0
- QNFRT4oBgHgl3EQfJje8/content/tmp_files/load_file.txt +0 -0
- QtFJT4oBgHgl3EQfJyy0/content/tmp_files/2301.11462v1.pdf.txt +2310 -0
- QtFJT4oBgHgl3EQfJyy0/content/tmp_files/load_file.txt +0 -0
-9A0T4oBgHgl3EQfPP_v/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e305e359d81679c75fcb2d2df70990a3492235ccdc37b086d64fae53c0263d45
+size 3604525
-NE5T4oBgHgl3EQfRg5P/content/2301.05521v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08d21cc31de748753afdcc2d37b2ca9b3df2dd167d4fc0332295bd171df33756
+size 753206
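Each added file above is stored as a Git LFS pointer rather than the binary itself: a `version` line, an `oid sha256:…` line, and a `size` line in bytes. As a hypothetical sketch (the parser below is not part of this diff), such a pointer can be read like this:

```python
# Minimal parser for a Git LFS pointer file (version / oid / size lines).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

# Pointer contents taken verbatim from the first hunk above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e305e359d81679c75fcb2d2df70990a3492235ccdc37b086d64fae53c0263d45
size 3604525
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 3604525
```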
.gitattributes
CHANGED
@@ -7130,3 +7130,37 @@ edE1T4oBgHgl3EQfLgP4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 ctE5T4oBgHgl3EQfEw7I/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 b9FIT4oBgHgl3EQfmyvP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 FdAyT4oBgHgl3EQfe_hY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+8tFRT4oBgHgl3EQfpzcC/content/2301.13614v1.pdf filter=lfs diff=lfs merge=lfs -text
+8tFRT4oBgHgl3EQfpzcC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+MdE1T4oBgHgl3EQftQUl/content/2301.03374v1.pdf filter=lfs diff=lfs merge=lfs -text
+VNFKT4oBgHgl3EQfmS5u/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+4tE4T4oBgHgl3EQf1A0X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+1dAzT4oBgHgl3EQfDfo-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+edE1T4oBgHgl3EQfLgP4/content/2301.02979v1.pdf filter=lfs diff=lfs merge=lfs -text
+2NE4T4oBgHgl3EQfagwY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+2NE4T4oBgHgl3EQfagwY/content/2301.05064v1.pdf filter=lfs diff=lfs merge=lfs -text
+aNE0T4oBgHgl3EQf4QKS/content/2301.02736v1.pdf filter=lfs diff=lfs merge=lfs -text
+dNE5T4oBgHgl3EQfgA8L/content/2301.05630v1.pdf filter=lfs diff=lfs merge=lfs -text
+dNE5T4oBgHgl3EQfgA8L/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+VNFKT4oBgHgl3EQfmS5u/content/2301.11857v1.pdf filter=lfs diff=lfs merge=lfs -text
+YtE1T4oBgHgl3EQfcQR-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+JdE2T4oBgHgl3EQfUgew/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+OtAyT4oBgHgl3EQfUfen/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KdFOT4oBgHgl3EQfzDRI/content/2301.12930v1.pdf filter=lfs diff=lfs merge=lfs -text
+ldAyT4oBgHgl3EQf_voI/content/2301.00912v1.pdf filter=lfs diff=lfs merge=lfs -text
+-9A0T4oBgHgl3EQfPP_v/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+rdFJT4oBgHgl3EQfbiw2/content/2301.11539v1.pdf filter=lfs diff=lfs merge=lfs -text
+rdFJT4oBgHgl3EQfbiw2/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+29FST4oBgHgl3EQfYTgs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+79A0T4oBgHgl3EQfOf-b/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+RtAyT4oBgHgl3EQf7_pZ/content/2301.00848v1.pdf filter=lfs diff=lfs merge=lfs -text
+mdFPT4oBgHgl3EQf4TUa/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+etE2T4oBgHgl3EQfxghO/content/2301.04111v1.pdf filter=lfs diff=lfs merge=lfs -text
+ldAyT4oBgHgl3EQf_voI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+hNAyT4oBgHgl3EQf-voB/content/2301.00895v1.pdf filter=lfs diff=lfs merge=lfs -text
+ntE5T4oBgHgl3EQfIA7J/content/2301.05446v1.pdf filter=lfs diff=lfs merge=lfs -text
+-NE5T4oBgHgl3EQfRg5P/content/2301.05521v1.pdf filter=lfs diff=lfs merge=lfs -text
+JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf filter=lfs diff=lfs merge=lfs -text
+PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf filter=lfs diff=lfs merge=lfs -text
+g9AzT4oBgHgl3EQf4P7n/content/2301.01843v1.pdf filter=lfs diff=lfs merge=lfs -text
+q9E3T4oBgHgl3EQf8gt0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0NAyT4oBgHgl3EQf0_na/content/tmp_files/2301.00729v1.pdf.txt
ADDED
@@ -0,0 +1,1646 @@
A Closed-Form EVSI Expression for a Multinomial Data-Generating Process

Adam Fleischhacker*, Pak-Wing Fok†, Mokshay Madiman‡, Nan Wu§

January 3, 2023

Abstract

This paper derives analytic expressions for the expected value of sample information (EVSI), the expected value of distribution information (EVDI), and the optimal sample size when data consists of independent draws from a bounded sequence of integers. Due to the challenges of creating tractable EVSI expressions, most existing work valuing data does so in one of three ways: 1) analytically, through closed-form expressions on the upper bound of the value of data; 2) numerically, by comparing decisions made using simulated data to optimal decisions where the underlying data distribution is known; or 3) by using variance reduction as a proxy for the uncertainty reduction that accompanies more data. For the very flexible case of modelling integer-valued observations using a multinomial data-generating process with Dirichlet prior, this paper develops expressions that 1) generalize existing beta-binomial computations, 2) do not require prior knowledge of some underlying "true" distribution, and 3) can be computed prior to the collection of any sample data.

1 Introduction

The seminal work of [34] introduced preposterior analysis, a Bayesian recipe for estimating the value of information (VOI) prior to knowing the information's content. The expected value of sample information (EVSI), a particularly valuable VOI computation, values the information contained in sample observations prior to their collection. [34] include many closed-form and oft-used expressions for calculating EVSI under the assumption of quadratic loss. One such expression is for a Bernoulli data-generating process with beta prior distribution (a.k.a. a beta-binomial model), each observation being either zero or one [34, Table 6.2, p. 191]. In this paper, we generalize the beta-binomial EVSI expression beyond binary-valued observations to the case where each data point is drawn from a bounded sequence of integers. These results expand the availability of tractable VOI expressions to a useful scenario where value could previously only be approximated or bounded when a closed-form expression was needed.

Depending on a modeler's choices of actions, states of uncertainty, loss (or utility) functions, and probability models, tractable calculations of VOI may exist, but intractable formulations, especially for EVSI, are much more common. In fact, the reputed statistician Dennis Lindley remarked that the question of sample size "is embarrassingly difficult to answer" due to difficulties calculating EVSI [26]. More generally, [14] shows that simply characterizing the relationship between information and value is challenging; [14]'s work dispels the idea that information value will reliably exhibit monotonic relationships with its determinants, such as action flexibility, risk aversion, or a decision maker's wealth.

While closed-form solutions are attainable for some EVSI and VOI problems [34, 5, 4], value of information solutions are often difficult to formulate. Hence, many papers are known for their ability to characterize aspects of VOI expressions, such as the distributional properties of the expected value of perfect information (EVPI) [28], the impact of an exogenous variable on EVPI [20], and the additivity of information value when multiple sources of uncertainty exist [21]. EVSI calculations, in particular, often result in intractable expressions of multiple integrals where only numerical methods can yield results [25]. Even then, many numerical methods still require further simplifying assumptions (see, e.g., [36]). While it is possible to approximate VOI computations via normal approximations (see, e.g., [30, 19]) or using a computationally intense simulation-based methodology (see, e.g., [10, 37]), closed-form expressions yield instantaneous and accurate value computations with more interpretable insights regarding the effects of prior beliefs and sample sizes.

*Department of Business Administration, University of Delaware, Newark, DE 19716, email: ajf@udel.edu
†Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, email: pakwing@udel.edu
‡Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, email: madiman@udel.edu
§Institute for Financial Services Analytics, University of Delaware, Newark, DE 19716, email: nanw@udel.edu

arXiv:2301.00729v1 [stat.ME] 2 Dec 2022
In this paper, we provide a new EVSI calculation for a flexible (i.e., multinomial) data-generating process that adheres to three desiderata outlined in [34, p. 44]:

Tractable: EVSI is easily calculated using a closed-form expression.

Rich: A decision maker's prior beliefs and information are readily incorporated as part of the calculation.

Interpretable: The expression for EVSI provides insight as to the effects of prior beliefs and sample size choices on the expected value of a sample.

Generating Process   Conjugate Prior                    Source
Bernoulli(θ)         θ ~ Beta                           [34], [32]
Poisson(λ)           λ ~ Gamma                          [34]
Normal(µ, σ)         µ ~ Normal, σ known                [34]
                     µ known, σ² ~ inv. Gamma           [34]
                     σ² ~ inv. Gamma, µ|σ² ~ Normal     [34]
Multinomial(t)¹      t ~ Dirichlet                      This Paper

Table 1: Position of this paper in comparison to other tractable EVSI calculations.
As shown in Table 1, our point of departure is generalizing the EVSI calculation for a Bernoulli data-generating process with beta prior (a.k.a. a beta-binomial model) to the case of a multinomial data-generating process with Dirichlet prior. Rich treatment and illustrative examples surrounding EVSI calculations for the beta-binomial conjugacy can be found in [15]. Additionally, [32] provide explicit closed-form value of information computations for the beta-binomial case; that work is very close in spirit to this one but does not investigate the Dirichlet-multinomial setting. In relation to the multinomial sampling process we explore in this paper, existing work has focused on non-utility-based approaches where data is valued based on its ability to bound a parameter of interest within a certain level of precision [1, 6]. Our approach, in contrast, extends the utility-based valuation of sampling to a multinomial sampling environment to yield closed-form expressions for both EVSI and the expected value of distribution information (EVDI). Publication of analytically tractable expressions will be able to supplant the still-present usage of Monte Carlo simulation in multinomial settings (see, e.g., [38]).

When closed-form EVSI expressions are unavailable, quantification of value created through uncertainty reduction typically relies on one of three techniques: 1) closed-form expressions on the upper bound of the value of data, 2) simulated comparisons of decisions made by an oracle who knows the underlying data distribution to decisions made by a less-informed decision maker, or 3) using variance reduction as a proxy for how data reduces underlying uncertainty in the data-generating process. For examples of the first type, [27] bound EVPI for a risk-averse decision maker and [40] place an upper bound on the value of knowing the true distribution when one already knows the mean and variance of that distribution. Examples of the second type often compare a Bayesian updating procedure to a known optimal solution [8, 29, 7, 35]. Lastly, computing the value of variance reduction independent of the specific quantity of data is also seen within the literature [11, 22].
2 Problem Setup

Despite substantial efforts, notation for preposterior analysis has not been standardized and is often a matter of personal taste [33]. To aid the reader with this paper's notation surrounding its random variables and their realizations, we present the following summary, breaking the notation into three levels of analysis:

1. Data/Sample. Data is an integer-valued random variable with support {0, 1, ..., M}. A sample is a random vector referring either to a sequence of n data observations or to a vector of counts representing the number of occurrences of each potential data value recorded in n observations.

D: A random variable representing a single data observation.
d: A single realization of D with integer-valued support: d ∈ {0, 1, ..., M}.
X ≡ (X1, ..., Xn): A random vector of n observations of D.
x ≡ (x1, ..., xn): A realization of data vector X.
Dⁿ: The support of X when n realizations are observed.
nk: The number of times that k ∈ {0, 1, ..., M} appears in x.
(n0, n1, ..., nM): A vector of counts of occurrences for each potential data value.

2. Data/Sampling Distributions. Data distribution and sampling distribution are identical terms referring to the probability distribution governing the data-generating process; data distribution is used when generating individual data points, while sampling distribution is preferred when talking about a sequence of observations.

T ≡ (T0, T1, ..., TM): A random vector representing a data distribution. Random elements Tk are data distribution parameters representing the probability of a data realization being k.
t ≡ (t0, t1, ..., tM): A realization of random vector T such that tk = p(D = k) for k ∈ {0, 1, ..., M}.
t*: The "true" data distribution or sampling distribution; only knowable by an oracle.
T: The space or set of all possible data distributions. T, t, t* ∈ T.

3. Prior/Posterior Distributions. Continuous multivariate probability distributions whose domain is the set of all possible data distributions.

π: A prior from which data distributions are generated.
πX: A posterior that updates π in light of data X.

¹With support interpreted as a sequence of integer values.
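The count notation above (nk is the number of times k appears in x) can be sketched in a few lines; the sample values here are illustrative, not prescribed by the paper:

```python
from collections import Counter

M = 5
x = (0, 5, 0)  # a sample of n = 3 observations of D

# Build the vector of counts (n0, n1, ..., nM), the sufficient statistic.
counts = Counter(x)
n_vec = tuple(counts.get(k, 0) for k in range(M + 1))
print(n_vec)  # (2, 0, 0, 0, 0, 1)
```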
2.1 Modelling Data and Loss

Consider a data-generating process that generates independent and identically distributed samples from a bounded sequence of M + 1 integers. For notational simplicity, we rescale the sequence to be [M] ≡ {0, 1, ..., M}. For practical motivation, the data could represent product demand, where the goal is to make accurate predictions for inventory control [39]. For the specific case of demand uncertainty, we note that there are asymmetric and other loss functions that would be preferred to the quadratic loss function used here, but closed-form expressions are not forthcoming for those cases.

The data-generating process is governed by an unknown data distribution, t, with discrete-finite support [M]. Thus the statistical model for the data-generating process is parameterized by the standard M-dimensional simplex of probabilities

    T = {t = (t0, ..., tM) ∈ R₊^(M+1) : t0 + ... + tM = 1};

this infinite (but finite-dimensional) parameter space describes how we are labeling the potential data distributions. If the sample size of the data is n, we have n values x1, ..., xn ∈ [M] being generated by the data-generating process. For a given t ∈ T, the associated data-generating process p_t^(n) assigns probability

    p_t^(n)(x1, ..., xn) = ∏_{i=1}^{n} t_{xi}                    (1)

to this particular sequence of data values. In particular, if the sample size is 1, the data-generating process is simply given by

    p_t(d) ≡ p_t^(1)(d) = t_d,    d ∈ [M].

It is clear that the number of occurrences of particular data values in the sample is a sufficient statistic for the model described, and that the sampling distribution for this sufficient statistic is just the multinomial model. Specifically, if n_d = |{1 ≤ i ≤ n : x_i = d}|, then (n0, ..., nM) is a sufficient statistic, and we have, with obvious abuse of notation,

    p_t(n0, ..., nM) = (n! / (n0! ··· nM!)) ∏_{d=0}^{M} t_d^{n_d}.    (2)

Note that n0 + ... + nM = n by definition, so we do not write the superscript (n) when using the sufficient statistic to represent the data.

When making predictions for future data, ideally the action (or prediction) is close to the actual data realization. For tractability, we consider a quadratic terminal opportunity loss function for a single prediction of the following form:

    ℓ(d, a) = k(d − a)²                    (3)

where k > 0 is a known constant, a is the action/prediction, and d ∈ [M] is the actual data realization.

To briefly make the above notation more concrete, imagine forecasting demand for a product that will sell between 0 and 5 units (M = 5). Each period's i.i.d. demand, d ∈ {0, 1, ..., 5}, has an associated probability of occurrence, p_t(0), p_t(1), ..., p_t(5), represented more compactly as t0, t1, ..., t5. The effectiveness of any action is measured using quadratic loss scaled by a factor k, such that if k = 5, d = 4, and a = 1, then ℓ(4, 1) = 45. The decision maker is contemplating the value of n = 3 observations, where the generated data, (x1, x2, x3), might be something like (0, 5, 0) and the associated sufficient statistic of counts, (n0, ..., n5), would be (2, 0, 0, 0, 0, 1). Note that t ≡ (t0, t1, ..., t5) parameterizes both the data-generating process of eq. (1) yielding (x1, x2, x3) and the equivalent sampling process of eq. (2) yielding (n0, ..., n5). As a result, we refer to t as both the data distribution and the sampling distribution depending on context.
|
| 277 |
+
6
|
| 278 |
+
|
| 279 |
+
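The running example above can be reproduced in a few lines. The following sketch (ours, not code from the paper) computes the sufficient statistic of counts and the scaled quadratic loss of eq. (3):

```python
from collections import Counter

def sufficient_statistic(xs, M):
    """Counts (n_0, ..., n_M): how often each value d in {0, ..., M} appears."""
    counts = Counter(xs)
    return [counts.get(d, 0) for d in range(M + 1)]

def quadratic_loss(d, a, k):
    """Terminal opportunity loss l(d, a) = k (d - a)^2 from eq. (3)."""
    return k * (d - a) ** 2

# Running example: M = 5, k = 5, observed sample (0, 5, 0).
print(sufficient_statistic([0, 5, 0], M=5))  # [2, 0, 0, 0, 0, 1]
print(quadratic_loss(d=4, a=1, k=5))         # 45
```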
2.2 Preposterior Analysis

For any data distribution t, define the expectation of loss as

    R(t, a) = E_{D|T=t}[ℓ(D, a)] = Σ_{d=0}^{M} p_t(d) ℓ(d, a),    (4)

where R(t, a) is the risk of action a under data distribution t. Since a decision maker (DM) does not know the underlying "true" data distribution t* ∈ T, the minimum risk, min_a R(t*, a), is likely unachievable.

For a DM, risk is evaluated on an average basis based on the probability distribution the DM places over the simplex T. Without any sample observations, this distribution is the prior π over all possible data distributions in T. The average risk of taking action a using prior π is

    ¯R(π, a) = E_T[R(T, a)],    (5)

with T ∼ π. The Bayes action for π is

    a*(π) = arg min_{a∈A} ¯R(π, a).    (6)

The Bayes risk for π is

    ¯R(π, a*(π)) = min_{a∈A} ¯R(π, a).    (7)

Access to a sample X ≡ (X_1, . . . , X_n) results in a different decision with different risk. With sample observations, the DM applies Bayes' rule to update π to π_X (the posterior) and calculates the associated optimal Bayes action a*(π_X). Since X is unknown prior to actually collecting the sample, the Bayes risk for π_X is itself a random variable. Hence, we evaluate the DM's prior expectation of loss with sample information over all possible samples X,

    E_X[¯R(π_X, a*(π_X))] = E_T E_{X|T}[R(T, a*(π_X))],    (8)

with T ∼ π and the right-hand side derived by substituting π_X for π in eq. (5) and applying the law of total expectation.

Thus, the expected value of sample information (EVSI), V_n(π), is the difference between the prior expectations of loss with and without sample X under prior π:

    V_n(π) = ¯R(π, a*(π)) − E_X[¯R(π_X, a*(π_X))]    (9)
           = E_T[R(T, a*(π))] − E_T E_{X|T}[R(T, a*(π_X))]    (10)

where T ∼ π and eq. (10) follows from eqs. (5) and (8). Proposition 2.1 formalizes our intuition that this expected value of sample information should be non-negative.

Proposition 2.1. Suppose data distribution T ≡ (T_0, . . . , T_M) is drawn from a given prior π. Assume further that a DM is given n samples X ≡ (X_1, . . . , X_n) and updates his/her prior to the posterior π_X. Then, under quadratic loss, the expected value of these n samples is non-negative, i.e.

    V_n(π) = E_T[R(T, a*(π))] − E_T E_{X|T}[R(T, a*(π_X))] ≥ 0.    (11)

Proof. See Appendix. □

Because the ordering within the sample X does not matter, the inner expectation in (11) is performed over (n_0, n_1, . . . , n_M) ∼ Multinomial(t) conditioned on T = t, where n_j is the number of times that j ∈ [M] appears in the sample, and the outer expectation is performed over T ∼ π.
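For small n, the preposterior quantity in eq. (9) can be computed exactly by enumerating all possible sufficient statistics. The sketch below (illustrative, not the paper's code) assumes a Dirichlet prior, the choice motivated in the next section, so that the predictive probabilities are available in closed form; it independently exhibits the non-negativity asserted by Proposition 2.1:

```python
from fractions import Fraction as F
from itertools import product
from math import factorial

def rising(x, m):
    """Rising factorial x (x+1) ... (x+m-1)."""
    out = F(1)
    for i in range(m):
        out *= x + i
    return out

def bayes_risk(q, k):
    """min_a sum_d q(d) k (d - a)^2; the minimizer is the predictive mean."""
    a = sum(d * q[d] for d in range(len(q)))
    return k * sum(q[d] * (d - a) ** 2 for d in range(len(q)))

def evsi_by_enumeration(alphas, n, k):
    """Eq. (9): prior Bayes risk minus the prior expectation of posterior
    Bayes risk, enumerating every sufficient statistic (n_0, ..., n_M)."""
    alpha = sum(alphas)
    prior_risk = bayes_risk([a_ / alpha for a_ in alphas], k)
    post = F(0)
    for counts in product(range(n + 1), repeat=len(alphas)):
        if sum(counts) != n:
            continue
        coef = factorial(n)
        for c in counts:
            coef //= factorial(c)
        # Dirichlet-multinomial (prior predictive) probability of these counts
        p = F(coef)
        for a_, c in zip(alphas, counts):
            p *= rising(a_, c)
        p /= rising(alpha, n)
        q_post = [(a_ + c) / (alpha + n) for a_, c in zip(alphas, counts)]
        post += p * bayes_risk(q_post, k)
    return prior_risk - post

# Zero-inflated prior on {0, ..., 5}, n = 3, k = 5 (the running example)
v = evsi_by_enumeration([F(10, 6)] + [F(1, 6)] * 5, n=3, k=5)
print(v, float(v))  # non-negative, as Proposition 2.1 requires
```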
3 Tractable Valuation of Sample Information

To arrive at a tractable valuation for (10), we leverage the Dirichlet distribution as a prior for three reasons: 1) it is a conjugate prior to categorical/multinomial outcomes, 2) its support is the M-dimensional simplex T, and 3) it has the flexibility to model many types of prior information for the decision maker. With the Dirichlet assumption, the main result of this paper, Theorem 3.1, can be presented:

Theorem 3.1. For data distribution T with support [M] and prior π = Dirichlet(α_0, α_1, . . . , α_M), the expected reduction in quadratic loss after observing n data samples, also called the expected value of sample information (EVSI), is given by

    V_n(π) = kn(c_2 − c_1²) / ((n + α)(1 + α)),    (12)

where α = Σ_{d=0}^{M} α_d is the precision/concentration parameter of the Dirichlet distribution (see [16]), and c_1 = (1/α) Σ_{d=0}^{M} d·α_d and c_2 = (1/α) Σ_{d=0}^{M} d²·α_d are the first and second moments of the data under the marginal likelihood (α_0, α_1, . . . , α_M)/α.

Proof. See Appendix. □

Theorem 3.1 gives the expected value of observing an n-trial multinomial sample with Dirichlet prior where the support of the underlying data-generating process is the bounded sequence of integers [M] = {0, 1, . . . , M}. This is a natural generalization of valuing an n-trial binomial sample with beta prior where the support of the underlying data-generating process is restricted such that [M] = {0, 1}. With just a slight change of notation, we know from [32] that EVSI for the beta-binomial case in closed form is

    kn/(n + α_0 + α_1) · α_0 α_1 / ((α_0 + α_1)²(α_0 + α_1 + 1)),    (13)

where π ∼ Beta(α_0, α_1). Replacing this prior with the equivalent Dirichlet parameterization π ∼ Dirichlet(α_0, α_1) and using Theorem 3.1 yields an identical result:

    V_n(π) = kn(c_2 − c_1²) / ((n + α)(1 + α))
           = kn/(n + α_0 + α_1) · (α_1/(α_0 + α_1) − α_1²/(α_0 + α_1)²) / (α_0 + α_1 + 1)
           = kn/(n + α_0 + α_1) · α_0 α_1 / ((α_0 + α_1)²(α_0 + α_1 + 1)).    (14)
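The reduction in eq. (14) can be spot-checked with exact rational arithmetic; the following is an illustrative verification (ours), not a substitute for the algebra:

```python
from fractions import Fraction as F

def evsi_dirichlet(alphas, n, k):
    """Eq. (12): V_n = k n (c2 - c1^2) / ((n + alpha)(1 + alpha))."""
    alpha = sum(alphas)
    c1 = sum(d * a for d, a in enumerate(alphas)) / alpha
    c2 = sum(d * d * a for d, a in enumerate(alphas)) / alpha
    return k * n * (c2 - c1 ** 2) / ((n + alpha) * (1 + alpha))

def evsi_beta_binomial(a0, a1, n, k):
    """Eq. (13), the known beta-binomial EVSI from [32]."""
    s = a0 + a1
    return F(k * n) / (n + s) * a0 * a1 / (s ** 2 * (s + 1))

# The two formulas agree exactly for several Beta/Dirichlet(a0, a1) priors
for a0, a1, n in [(F(1), F(1), 5), (F(3, 2), F(7, 3), 12), (F(2), F(5), 1)]:
    assert evsi_dirichlet([a0, a1], n, 5) == evsi_beta_binomial(a0, a1, n, 5)
print("eq. (12) with M = 1 matches eq. (13) exactly")
```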
As a direct consequence of Theorem 3.1, when n → ∞ we have an expression for the expected value of distribution information (EVDI), as an infinite sample gives the data distribution exactly:

    V_∞(π) = lim_{n→∞} V_n(π) = k(c_2 − c_1²) / (1 + α).    (15)

Lastly, we can express the efficiency η of the sample information as a function of the number of sample points using the ratio of (12) to (15):

    η = n / (n + α).    (16)

Hence, the percentage of value obtained through sampling is given by the ratio of the number of data points n to the sum of the n data points and the concentration parameter α of the Dirichlet distribution. This sampling efficiency calculation directly simplifies to the known formula for the beta-binomial case from [34] (in our notation): η = n/(n + α_0 + α_1), where π ∼ Beta(α_0, α_1).

Again, we make the notation more concrete by revisiting our product-demand forecasting example from the end of §2.1. Recall, we have a product that will sell between 0 and 5 units (M = 5) and loss is scaled by k = 5. The decision maker is contemplating the value of n = 3 observations. Introducing a zero-inflated prior π ∼ Dirichlet(10/6, 1/6, 1/6, 1/6, 1/6, 1/6) means α = 15/6, c_1 = (6/15)·(0·10/6 + 1·1/6 + 2·1/6 + 3·1/6 + 4·1/6 + 5·1/6) = 1, and c_2 = (6/15)·(0·10/6 + 1·1/6 + 4·1/6 + 9·1/6 + 16·1/6 + 25·1/6) = 11/3. Plugging into eq. (12) yields EVSI V_3(π) = 160/77 ≈ 2.08 and EVDI V_∞(π) = 80/21 ≈ 3.81. From eq. (16) we get η = 6/11 ≈ 54.5%, so the learning from n = 3 samples is expected to provide more than half of the maximum possible reduction in loss. Following from eqs. (26)–(31), a*(π) = 1 and the prior expected loss is ¯R(π, a*(π)) = 5·((−1)²·10/15 + 0²·1/15 + 1²·1/15 + 2²·1/15 + 3²·1/15 + 4²·1/15) = 40/3 ≈ 13.33. And thus, we can also get the prior expectation of posterior loss E_X[¯R(π_X, a*(π_X))] = ¯R(π, a*(π)) − V_3(π) = 40/3 − 160/77 ≈ 11.26.
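The worked example's numbers can be reproduced exactly with rational arithmetic; a quick sketch (not from the paper):

```python
from fractions import Fraction as F

k, n = 5, 3
alphas = [F(10, 6)] + [F(1, 6)] * 5                   # zero-inflated prior, M = 5
alpha = sum(alphas)                                   # concentration = 15/6
c1 = sum(d * a for d, a in enumerate(alphas)) / alpha      # first moment = 1
c2 = sum(d * d * a for d, a in enumerate(alphas)) / alpha  # second moment = 11/3

evsi = k * n * (c2 - c1 ** 2) / ((n + alpha) * (1 + alpha))  # eq. (12)
evdi = k * (c2 - c1 ** 2) / (1 + alpha)                      # eq. (15)
eta = F(n) / (n + alpha)                                     # eq. (16)
prior_loss = k * (c2 - c1 ** 2)                              # prior Bayes risk

print(evsi, evdi, eta, prior_loss - evsi)
# EVSI = 160/77, EVDI = 80/21, efficiency = 6/11, posterior loss = 40/3 - 160/77
```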
4 Notes on Richness and Interpretability of Modeling Assumptions

In the previous section, we showed one of the three EVSI desiderata, tractability, can be achieved for a multinomial data-generating process with Dirichlet prior. The multinomial distribution is flexible enough to model any discrete (finite) data distribution. Its prior, the Dirichlet distribution, is also flexible in its ability to model a wide range of distributions over a simplex. Yet, some sacrifice of richness in modeling prior beliefs is made in the name of tractability. Most notably, a richer and more flexible alternative prior over a simplex is the logistic-normal distribution [see discussion in 3]. The most glaring weakness of the Dirichlet distribution is in modeling prior beliefs where there is some type of correlation structure between data observations. For example, observing a high data value, say 100, would make one think values of 101 and 99 are also more likely to occur than data values further away. However, the Dirichlet distribution, as a prior distribution to multinomial data, is unable to capture this structure. Notably, the distribution-free underpinnings of the Kaplan-Meier estimator also ignore this potential correlation among data observations, yet show favorable results in a similar repeated newsvendor setting [17].

The richness of the Dirichlet prior is best seen through the lens of its intuitive reparameterization [16]. Let the concentration parameter α = Σ_{i=0}^{M} α_i, and let the vector m = (α_0/α, α_1/α, . . . , α_M/α) represent the mean, where the expected mean of the data observations is given as c_1 = (1/α) Σ_{i=0}^{M} i·α_i = Σ_{i=0}^{M} i·m_i. When α is small, say α ≤ M, the prior distribution over the simplex can differ greatly from m and reflect a decision maker's uncertainty
[Figure 1 here. Panels per row: "Dirichlet Shape Parameter for M = 20" (parameter value vs. Dirichlet parameter α0–α20), "Sample Realizations of Multinomial Parameters" (parameter value vs. multinomial parameter p0–p20, two draws), and "EVSI as Function of n for M = 20" (value vs. # of samples n, with the EVDI level marked). Top row: Concentration Parameter = 10; bottom row: Concentration Parameter = 50.]

Figure 1: Graphical depiction of the Dirichlet prior parameters, potential realizations for that prior (i.e. the multinomial parameters), and the EVSI/EVDI calculations as a function of n samples for the given prior. Top row for concentration parameter α = 10 and bottom row for concentration parameter α = 50.
around their expectation. As α is made larger, the prior distribution will concentrate probability density near m and reflect greater confidence. We present a graphical overview of this in Figure 1 for two different concentration parameters. As seen, when α is smaller (top row of Figure 1) the realized multinomial parameters (middle-top plot) can be further away from the mean m (which is proportional to the parameters in the top-left plot). As α increases (bottom row) the prior distribution becomes much more informative and the multinomial parameters will most likely mirror the prior Dirichlet parameters.

In terms of interpretability, Theorem 3.1 formalizes our intuition about what drives the value of data. Specifically, data is valuable when 1) the sample contains a lot of data (high n), 2) the expected variance of the data distribution is large (high c_2 − c_1²), and 3) there is a lot of uncertainty regarding the true data distribution (α is small). Additionally, the calculation for EVDI (eq. 15) gives an interpretable upper bound on the value of data where high variance pushes to make samples more valuable and a high concentration parameter makes samples less valuable. Lastly, the equation for efficiency (16) adds further insight by stating how quickly the upper bound on the value of data is approached; basically, the smaller the Dirichlet concentration parameter, the more quickly EVDI is approached with each subsequent data point.
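The concentration effect shown in Figure 1 can also be quantified directly: marginally, each coordinate p_d of a Dirichlet draw with mean m and concentration α follows a Beta(α·m_d, α·(1 − m_d)) distribution, so its variance is m_d(1 − m_d)/(α + 1). A small illustrative computation (assuming a uniform mean vector for simplicity, unlike the skewed mean in Figure 1):

```python
def dirichlet_component_sd(m_d, alpha):
    """Standard deviation of one coordinate p_d of a Dirichlet draw with mean
    entry m_d and concentration alpha: sqrt(m_d (1 - m_d) / (alpha + 1))."""
    return (m_d * (1 - m_d) / (alpha + 1)) ** 0.5

M = 20
m_d = 1 / (M + 1)                # uniform mean vector entry
for alpha in (10, 50):
    print(alpha, round(dirichlet_component_sd(m_d, alpha), 4))
# raising alpha from 10 to 50 shrinks the spread by a factor sqrt(51/11) ~ 2.15
```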
5 Illustrative Examples

In this section, we demonstrate how the tractable formulation for EVSI, equation (12), can serve as a building block inside of other research initiatives. The first example explores sample size optimization and the second example shows how a tractable EVSI calculation can lead to a tractable decision policy in a two-stage production planning problem. In the third and last example, the EVSI formula provides a foundation from which to benchmark heuristic updating procedures that seek to estimate an underlying unknown data distribution.

5.1 The Choice of Sample Size

We now explore a decision maker's objective to choose the number of sample points to collect in such a way as to minimize his expected loss, assuming the expected sampling cost, C_s(n), is a linear function of the number of sampled points n:

    C_s(n) = K + sn,    (17)

where s is the cost of one sample and K represents the fixed costs of sampling.

The loss function to be minimized, ℓ_s(n), combines equations (12) and (17):

    ℓ_s(n) = −kn(c_2 − c_1²) / ((n + α)(1 + α)) + K + sn.    (18)

Assuming for practical purposes that n can be treated continuously, we get the optimal sample size

    n* = sqrt( (α/(1 + α)) · (k/s) · (c_2 − c_1²) ) − α    (19)

for cases where n* is positively valued and the fixed costs of sampling K can be recovered, i.e. V_{n*}(π) > C_s(n*). In all other cases, n* = 0. Equation (19) has a nice economic interpretation where the three terms represent the strength of the prior, the ratio between the scaling of the quadratic loss costs and the unit sampling costs, and the predicted variance of the data distribution.
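A sketch of the sample-size rule follows, using the running example's prior (α = 5/2, c_2 − c_1² = 8/3, k = 5) with an assumed unit sampling cost s = 0.1 and fixed cost K = 1; the grid search confirms that the closed form in eq. (19) minimizes eq. (18):

```python
import math

def evsi(n, k, alpha, var):              # eq. (12), var = c2 - c1^2
    return k * n * var / ((n + alpha) * (1 + alpha))

def total_loss(n, k, alpha, var, K, s):  # eq. (18)
    return -evsi(n, k, alpha, var) + K + s * n

def n_star(k, alpha, var, s):            # eq. (19), before the K/positivity checks
    return max(0.0, math.sqrt(alpha / (1 + alpha) * (k / s) * var) - alpha)

k, alpha, var, K, s = 5, 2.5, 8 / 3, 1.0, 0.1
opt = n_star(k, alpha, var, s)
grid = [total_loss(x / 10, k, alpha, var, K, s) for x in range(0, 501)]
assert total_loss(opt, k, alpha, var, K, s) <= min(grid) + 1e-9
assert evsi(opt, k, alpha, var) > K + s * opt   # fixed cost K is recovered here
print(round(opt, 3))  # continuous optimum, roughly 7.26 samples
```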
5.2 Two-Stage Production Planning

The example shown here is a simple two-stage production planning problem (see, e.g., [9]) where the decision maker seeks to optimally schedule the 2nd production run.

Assume J periods make up a selling season. Each period j ∈ J faces independent and identical categorical demand with Dirichlet prior and quadratic loss (i.e. a repeated newsvendor setting with quadratic loss) with identical shipments scheduled for each period. A decision maker can choose either 1) to schedule the delivery quantity for each period in the entire selling season or, 2) at cost K, to specify a period j* after which the scheduled delivery quantity can be changed. Assuming this change date will be contractually set in advance of the selling season, find j* to minimize expected net costs over the entire season J.

The net cost function for this problem is

    C(j) = 0,                                              if j = 0,
    C(j) = K − (J − j) · kj(c_2 − c_1²)/((j + α)(1 + α)),  if j ∈ (0, J].    (20)

When j ∈ (0, J], the net cost function C(·) is strictly convex and has a unique global minimum value. The optimal period j* is

    j* = arg min_{j∈{0,1,...,J}} C(j).

When min C(j) = 0 for 0 < j ≤ J, we choose j* = 0. For the case when min C(j) < 0, the continuous minimizer is

    j* = sqrt(α(J + α)) − α.

Considering that j* must be a non-negative integer, summarizing the different cases we have the optimal j* as

    j* = 0,                                  if min_{j∈[0,J]} C(j) = 0,
    j* = arg min_{j∈{⌊j_0⌋,⌈j_0⌉}} C(j),     if min_{j∈[0,J]} C(j) < 0,    (21)

where j_0 = sqrt(α(J + α)) − α.
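A sketch of the change-date policy of eqs. (20)–(21), with hypothetical numbers (J = 12, K = 2) plugged into the running example's prior; a brute-force search over all integer periods confirms the rounded closed-form candidate:

```python
import math

def net_cost(j, J, K, k, alpha, var):    # eq. (20), var = c2 - c1^2
    if j == 0:
        return 0.0
    return K - (J - j) * k * j * var / ((j + alpha) * (1 + alpha))

def best_change_period(J, K, k, alpha, var):  # eq. (21)
    j0 = math.sqrt(alpha * (J + alpha)) - alpha
    candidates = [j for j in (math.floor(j0), math.ceil(j0)) if 0 < j <= J]
    best = min(candidates, key=lambda j: net_cost(j, J, K, k, alpha, var), default=0)
    return best if net_cost(best, J, K, k, alpha, var) < 0 else 0

J, K, k, alpha, var = 12, 2.0, 5, 2.5, 8 / 3
j_star = best_change_period(J, K, k, alpha, var)
brute = min(range(J + 1), key=lambda j: net_cost(j, J, K, k, alpha, var))
assert j_star == brute
print(j_star)  # 4
```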
5.3 Benchmarking Data-Driven Algorithms

An active area of research is to propose algorithms for decisions in repeated settings where minimal assumptions about the underlying data distribution are known. These approaches include Sample Average Approximation (SAA) [24, 23], concave adaptive value estimation (CAVE) [12], and Second Order Belief Maximum Entropy (SOBME) [35]. When benchmarking these algorithms, it is customary to pick a handful of "true" distributions where the algorithm competes against a known optimal solution.

With the introduction of a closed-form EVSI calculation in the context of a Dirichlet prior, a more robust benchmarking scenario can be achieved. Instead of picking a "true" data distribution, we pick a "true prior" from the Dirichlet family with support matching the problem of interest. This prior can be used to then simulate "true" data distributions (as many as we want) by which we can estimate the reduction in squared loss as a function of n, the number of data samples. Given this setup, a comparison of a proposed algorithm can be made against a known optimal updating procedure. After all, it is the updating procedure that we are seeking to validate, and the optimal updating procedure to benchmark new algorithms against is, therefore, the Bayesian one detailed in the proof of Theorem 3.1 (see appendix).

As a proof of concept, Figure 2 is an example benchmarking the well-known sample average approximation (SAA) (see [24]) against the known optimal Bayesian updating procedure (BAYES) using a Dirichlet(α_0, α_1, . . . , α_M)
[Figure 2 here: "Expected Loss as Function of n for M = 20", expected quadratic loss vs. # of sample data points for updating methods BAYES and SAA, with the optimal squared loss (i.e. distribution known) shown as a reference line.]

Figure 2: Comparing the sample average approximation (SAA) updating procedure to the known Bayesian (BAYES) optimal updating procedure.
prior with M = 20, α = 10, and m ∝ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 13, 11, 9, 7, 5, 3, 1} (chosen to be slightly skewed). In this scenario, we see the value of prior information in small data settings as BAYES outperforms SAA. It also shows how, as the amount of data increases, the non-parametric SAA algorithm's performance improves and closely mimics that of the optimal Bayesian updating procedure.
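The benchmarking loop described above can be sketched with the standard library alone. This is a simplified stand-in for the Figure 2 experiment (an assumed symmetric Dirichlet(0.5, . . . , 0.5) prior rather than the skewed one, and fewer replications): under quadratic loss SAA reduces to predicting the sample mean, while BAYES predicts the posterior-predictive mean (αc_1 + nZ)/(α + n) from eq. (32):

```python
import random

random.seed(7)
M, k, n, reps = 20, 1.0, 5, 4000
alphas = [0.5] * (M + 1)                 # assumed symmetric Dirichlet prior
alpha = sum(alphas)
c1 = sum(d * a for d, a in enumerate(alphas)) / alpha

def draw_dirichlet(alphas):
    """Draw a 'true' data distribution from the prior via gamma variates."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

saa_loss = bayes_loss = 0.0
for _ in range(reps):
    t = draw_dirichlet(alphas)
    xs = random.choices(range(M + 1), weights=t, k=n)
    z = sum(xs) / n                                 # sample mean Z
    a_saa = z                                       # SAA action (quadratic loss)
    a_bayes = (alpha * c1 + n * z) / (alpha + n)    # Bayes action, eq. (32)
    ed = sum(d * p for d, p in enumerate(t))        # true mean of D
    ed2 = sum(d * d * p for d, p in enumerate(t))
    var_t = ed2 - ed ** 2
    # expected quadratic loss k E[(D - a)^2] = k (Var[D] + (E[D] - a)^2)
    saa_loss += k * (var_t + (ed - a_saa) ** 2) / reps
    bayes_loss += k * (var_t + (ed - a_bayes) ** 2) / reps

print(round(bayes_loss, 2), round(saa_loss, 2))  # BAYES beats SAA for small n
```

For larger n the two curves converge, mirroring the behavior in Figure 2.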
6 Conclusion

The use of preposterior analysis in this paper provides a formal method for valuing data prior to its collection and, as such, should serve as a building block in many systems and models going forward. By expanding the support of the underlying data-generating process from [M] = {0, 1} to [M] = {0, 1, . . . , M}, the beta-binomial EVSI calculations are successfully generalized to a Dirichlet-multinomial setting. Using this new EVSI computation, three illustrative examples valuing data prior to its collection are shown; there are potentially many other contexts where this tractable formulation might also prove useful. Researchers in two particular areas, medical decision making and active (machine) learning, are known to be interested in EVSI types of calculations (see, e.g., [2, 13, 18, 31]). And we look forward to hearing of other useful deployments for this method of valuing data prior to its collection.
A Proof of Proposition 2.1 and Theorem 3.1

A.1 Proof of Proposition 2.1

The expected value of sample information is

    V_n(π) = E_T[R(T, a*(π))] − E_T E_{X|T}[R(T, a*(π_X))].    (22)

For the first term in eq. (22), we have

    E_T[R(T, a*(π))] = kE_T[E_{D|T}[(D − a*(π))²]]
                     = kE_T[E_{D|T}[(D − E[D])²]]
                     = kE_D[(D − E[D])²]
                     = kVar[D].    (23)

The second line is due to the optimal action under squared loss being the mean (see eq. (30)). The third line of equation (23) follows from the law of total expectation. Thus, the optimal Bayes risk without sample information under quadratic loss (3) is the marginal variance of D scaled by a factor k.

Similarly, for the second term in eq. (22) we find

    E_T[E_{X|T}[R(T, a*(π_X))]] = kE_T[E_{X|T}[E_{D|T}[(D − a*(π_X))²]]]
                                = kE_T[E_{X|T}[E_{D|T}[(D − E_{D|X}[D])²]]]
                                = kE_X[E_{D|X}[(D − E_{D|X}[D])²]]
                                = kE_X[Var_{D|X}[D]].    (24)

The optimal Bayes risk under quadratic loss (3) if a sample of size n is to be collected is the expected variance of the predictive posterior distribution of D scaled by a factor k.

Combining (22), (23), and (24), we complete the proof:

    V_n(π) = E_T[R(T, a*(π))] − E_T[E_{X|T}[R(T, a*(π_X))]]
           = kVar[D] − kE_X[Var_{D|X}[D]]
           = k(Var[D] − E_X[Var_{D|X}[D]])
           = kVar_X[E_{D|X}[D]] ≥ 0.    (25)

The last equality in equation (25) follows from the law of total variance. Since k > 0 and Var_X[E_{D|X}[D]] ≥ 0 for any X, we have V_n(π) ≥ 0 for any sample size n. □
A.2
|
| 941 |
+
Proof of Theorem 3.1
|
| 942 |
+
Consider the prior distribution for the data-generating process
|
| 943 |
+
π = Dirichlet(α0, α1, . . . , αM).
|
| 944 |
+
Suppose our information consists of n samples of the data distribution. Let
|
| 945 |
+
nj, j ∈ [M] be the frequency of the data being j so that nj are integers such
|
| 946 |
+
that �M
|
| 947 |
+
j=0 nj = n. Then, because the multinomial and Dirichlet distribu-
|
| 948 |
+
tions are conjugate,
|
| 949 |
+
πX
|
| 950 |
+
=
|
| 951 |
+
Dirichlet(α0 + n0, α1 + n1, . . . , αM + nM).
|
| 952 |
+
Because π and πX both belong to the same class of distributions, we derive
|
| 953 |
+
closed-form valuations for the information X. The corresponding marginal
|
| 954 |
+
likelihoods for π and πX are
|
| 955 |
+
qπ(d)
|
| 956 |
+
=
|
| 957 |
+
αd
|
| 958 |
+
α ,
|
| 959 |
+
qπX(d)
|
| 960 |
+
=
|
| 961 |
+
αd + nd
|
| 962 |
+
α + n ,
|
| 963 |
+
where α = �M
|
| 964 |
+
i=0 αi. If the information happens to occur in such a way
|
| 965 |
+
that nj ∝ αj for each j, then the updated marginal likelihood is unchanged:
|
| 966 |
+
qd(π) = qd(πX), d ∈ [M].
|
| 967 |
+
For convenience, define the quantities
|
| 968 |
+
Z
|
| 969 |
+
=
|
| 970 |
+
1
|
| 971 |
+
n
|
| 972 |
+
M
|
| 973 |
+
�
|
| 974 |
+
d=0
|
| 975 |
+
dnd,
|
| 976 |
+
c1
|
| 977 |
+
=
|
| 978 |
+
1
|
| 979 |
+
α
|
| 980 |
+
M
|
| 981 |
+
�
|
| 982 |
+
d=0
|
| 983 |
+
dαd,
|
| 984 |
+
c2
|
| 985 |
+
=
|
| 986 |
+
1
|
| 987 |
+
α
|
| 988 |
+
M
|
| 989 |
+
�
|
| 990 |
+
d=0
|
| 991 |
+
d2αd,
|
| 992 |
+
where Z represents the average frequency of the sample, c1 the prior expec-
|
| 993 |
+
tation for a sample value, and c2 the prior second moment for the sample
|
| 994 |
+
value.
|
| 995 |
+
17
|
| 996 |
+
|
| 997 |
+
Given the loss function in (3), the Bayes risk and action without sample
|
| 998 |
+
information can be explicitly calculated
|
| 999 |
+
¯R(π, a)
|
| 1000 |
+
=
|
| 1001 |
+
ET∼π[R(T, a)],
|
| 1002 |
+
(26)
|
| 1003 |
+
=
|
| 1004 |
+
ET∼π
|
| 1005 |
+
� M
|
| 1006 |
+
�
|
| 1007 |
+
d=0
|
| 1008 |
+
pT (d)ℓ(d, a)
|
| 1009 |
+
�
|
| 1010 |
+
,
|
| 1011 |
+
(27)
|
| 1012 |
+
=
|
| 1013 |
+
M
|
| 1014 |
+
�
|
| 1015 |
+
d=0
|
| 1016 |
+
ℓ(d, a)ET∼π[pT (d)],
|
| 1017 |
+
(28)
|
| 1018 |
+
=
|
| 1019 |
+
M
|
| 1020 |
+
�
|
| 1021 |
+
d=0
|
| 1022 |
+
ℓ(d, a)qπ(d),
|
| 1023 |
+
(29)
|
| 1024 |
+
where {qπ(0), qπ(1), . . . , qπ(M)} is the marginal likelihood. The Bayes action
|
| 1025 |
+
minimizes eq. (29):
|
| 1026 |
+
∂ ¯R(π, a)
|
| 1027 |
+
∂a
|
| 1028 |
+
=
|
| 1029 |
+
−2k
|
| 1030 |
+
M
|
| 1031 |
+
�
|
| 1032 |
+
d=0
|
| 1033 |
+
(d − a)qπ(d) = −2k
|
| 1034 |
+
� M
|
| 1035 |
+
�
|
| 1036 |
+
d=0
|
| 1037 |
+
dqπ(d) − a
|
| 1038 |
+
M
|
| 1039 |
+
�
|
| 1040 |
+
d=0
|
| 1041 |
+
qπ(d)
|
| 1042 |
+
�
|
| 1043 |
+
= 0,
|
| 1044 |
+
⇒ a∗(π)
|
| 1045 |
+
=
|
| 1046 |
+
M
|
| 1047 |
+
�
|
| 1048 |
+
d=0
|
| 1049 |
+
dqπ(d),
|
| 1050 |
+
=
|
| 1051 |
+
Eqπ[D],
|
| 1052 |
+
(30)
|
| 1053 |
+
=
|
| 1054 |
+
c1.
|
| 1055 |
+
(31)
|
| 1056 |
+
the mean data outcome under the prior marginal likelihood.
|
The corresponding Bayes risk is
\begin{align}
\bar{R}(\pi, a^*(\pi)) &= k \sum_{d=0}^{M} \big(d - a^*(\pi)\big)^2\, q_\pi(d) = k\,\mathrm{Var}_{q_\pi}[D] = k\,(c_2 - c_1^2).
\end{align}
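As a concrete check, under a Dirichlet$(\alpha_0, \ldots, \alpha_M)$ belief the prior marginal likelihood is $q_\pi(d) = \alpha_d/\alpha$, so the Bayes action and Bayes risk above can be verified numerically. The sketch below is ours; the $\alpha_d$ values are purely illustrative:

```python
import numpy as np

# Illustrative Dirichlet prior over outcomes d = 0..M (here M = 4)
alphas = np.array([2.0, 3.0, 5.0, 4.0, 1.0])
alpha = alphas.sum()
d = np.arange(len(alphas))
k = 1.0                                  # loss scale in l(d, a) = k(d - a)^2

q = alphas / alpha                       # prior marginal likelihood q_pi(d)
c1 = (d * alphas).sum() / alpha          # prior mean of a sample value
c2 = (d**2 * alphas).sum() / alpha       # prior second moment

a_star = (d * q).sum()                   # Bayes action, eq. (30)
risk = k * ((d - a_star)**2 * q).sum()   # Bayes risk at the Bayes action

assert np.isclose(a_star, c1)            # eq. (31)
assert np.isclose(risk, k * (c2 - c1**2))
```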
Similarly, with sample information we have
\begin{align}
\frac{\partial \bar{R}(\pi_X, a)}{\partial a} &= -2k \sum_{d=0}^{M} (d - a)\,q_{\pi_X}(d) = -2k\Bigg[\sum_{d=0}^{M} d\,q_{\pi_X}(d) - a \sum_{d=0}^{M} q_{\pi_X}(d)\Bigg] = 0,\\
\Rightarrow\; a^*(\pi_X) &= \sum_{d=0}^{M} d\,q_{\pi_X}(d) = \mathbb{E}_{q_{\pi_X}}[D] = \frac{\alpha c_1 + nZ}{\alpha + n}, \tag{32}
\end{align}
which is the mean data outcome under the posterior marginal likelihood. Now express EVSI as
\begin{align}
V_n(\pi) &= \bar{R}(\pi, a^*(\pi)) - \mathbb{E}_T\, \mathbb{E}_{X|T}\, R(T, a^*(\pi_X)). \tag{33}
\end{align}
Note that the inner expectation is taken over the data frequencies, which follow a multinomial distribution, $(n_0, \ldots, n_M) \sim \mathrm{Multinomial}(n;\, p_t(0), \ldots, p_t(M))$, and the outer expectation is taken over all possible distributions $p_t \sim \mathrm{Dir}(\alpha_0, \ldots, \alpha_M)$.

The first term in (33) has already been evaluated as $k(c_2 - c_1^2)$. We now calculate the second term.
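The two-stage expectation in (33) can also be checked by brute-force simulation: draw $p_t$ from the Dirichlet prior, draw multinomial data, apply the posterior-mean action (32), and average the resulting risk. The sketch below is ours, with illustrative $\alpha_d$, $k$, and $n$; the closed-form value it is compared against is the paper's eq. (37):

```python
import numpy as np

rng = np.random.default_rng(0)
alphas = np.array([2.0, 3.0, 5.0, 4.0, 1.0])   # illustrative prior
alpha = alphas.sum()
d = np.arange(len(alphas))
k, n = 1.0, 10                                  # loss scale, sample size

c1 = (d * alphas).sum() / alpha
c2 = (d**2 * alphas).sum() / alpha
prior_risk = k * (c2 - c1**2)                   # first term of eq. (33)

# Monte Carlo estimate of the second term of eq. (33)
total, reps = 0.0, 20000
for _ in range(reps):
    p_t = rng.dirichlet(alphas)                 # outer draw: T ~ pi
    counts = rng.multinomial(n, p_t)            # inner draw: X | T
    Z = (d * counts).sum() / n                  # sample mean
    a = (alpha * c1 + n * Z) / (alpha + n)      # posterior-mean action, eq. (32)
    total += k * (p_t * (d - a)**2).sum()       # R(T, a*(pi_X))
evsi_mc = prior_risk - total / reps

evsi_exact = k * n * (c2 - c1**2) / ((n + alpha) * (1 + alpha))  # eq. (37)
print(evsi_mc, evsi_exact)                      # agree up to Monte Carlo error
```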
\begin{align}
R(t, a^*(\pi_X)) &= k \sum_{d=0}^{M} p_t(d)\,\big(d - a^*(\pi_X)\big)^2 = k \sum_{d=0}^{M} p_t(d)\left(d - \frac{\alpha c_1 + nZ}{\alpha + n}\right)^2\\
\Rightarrow\; \mathbb{E}_{X|T=t}\big[R(t, a^*(\pi_X))\big] &= k \sum_{d=0}^{M} p_t(d)\Bigg[d^2 - \left(\frac{2nd}{\alpha + n} - \frac{2n\alpha c_1}{(\alpha + n)^2}\right)\mathbb{E}_X[Z] - \frac{2d\alpha c_1}{\alpha + n} + \frac{\alpha^2 c_1^2}{(\alpha + n)^2} + \frac{n^2}{(\alpha + n)^2}\,\mathbb{E}_X[Z^2]\Bigg]. \tag{34}
\end{align}
Since $Z(n_0, \ldots, n_M) = \frac{1}{n}\sum_{d=0}^{M} d\,n_d$,
\begin{align}
\mathbb{E}_{X|T=t}[Z] &= \sum_{d=0}^{M} d\,p_t(d),\\
\mathbb{E}_{X|T=t}[Z^2] &= \mathrm{Var}_{X|T=t}[Z] + \big(\mathbb{E}_{X|T=t}[Z]\big)^2 = \frac{1}{n}\sum_{d=0}^{M} d^2 p_t(d) + \frac{n-1}{n}\Bigg(\sum_{d=0}^{M} d\,p_t(d)\Bigg)^2,
\end{align}
where the last line follows from the fact that
\begin{align}
\mathrm{Var}_{X|T=t}[Z] &= \mathrm{Var}_{X|T=t}\Bigg[\frac{1}{n}\sum_{d=0}^{M} d\,n_d\Bigg] = \frac{1}{n^2}\,\mathrm{Var}_{X|T=t}\Bigg[\sum_{d=0}^{M} d\,n_d\Bigg]\\
&= \frac{1}{n^2}\Bigg[\sum_{d=0}^{M} d^2\,\mathrm{Var}_{X|T=t}[n_d] + 2\!\!\sum_{0\le i<j\le M}\!\! ij\,\mathrm{Cov}_{X|T=t}(n_i, n_j)\Bigg]\\
&= \frac{1}{n^2}\Bigg[\sum_{d=0}^{M} d^2\,n\,p_t(d)\big(1 - p_t(d)\big) - 2\!\!\sum_{0\le i<j\le M}\!\! ij\,n\,p_t(i)\,p_t(j)\Bigg]\\
&= \frac{1}{n}\sum_{d=0}^{M} d^2 p_t(d) - \frac{1}{n}\Bigg[\sum_{d=0}^{M} d^2 p_t^2(d) + 2\!\!\sum_{0\le i<j\le M}\!\! ij\,p_t(i)\,p_t(j)\Bigg]\\
&= \frac{1}{n}\sum_{d=0}^{M} d^2 p_t(d) - \frac{1}{n}\Bigg(\sum_{d=0}^{M} d\,p_t(d)\Bigg)^2. \tag{35}
\end{align}
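The multinomial variance identity (35) is easy to confirm numerically: the closed form $\frac{1}{n}\big[\sum_d d^2 p_t(d) - (\sum_d d\,p_t(d))^2\big]$ should match the empirical variance of simulated sample means. A minimal sketch of ours, with an arbitrary belief $p_t$:

```python
import numpy as np

rng = np.random.default_rng(1)
p_t = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # arbitrary belief over d = 0..M
d = np.arange(len(p_t))
n = 25

mu = (d * p_t).sum()
var_exact = ((d**2 * p_t).sum() - mu**2) / n  # eq. (35)

counts = rng.multinomial(n, p_t, size=200000) # repeated samples of size n
Z = counts @ d / n                            # sample means Z
print(Z.var(), var_exact)                     # close up to simulation noise
```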
Eq. (34) becomes
\begin{align}
\mathbb{E}_{X|T=t}\big[R(t, a^*(\pi_X))\big] = k\Bigg[&\left(1 + \frac{n}{(\alpha + n)^2}\right)\sum_{d=0}^{M} d^2 p_t(d) + \left(\frac{2\alpha n c_1}{(\alpha + n)^2} - \frac{2\alpha c_1}{\alpha + n}\right)\sum_{d=0}^{M} d\,p_t(d)\\
&+ \left(\frac{n(n-1)}{(\alpha + n)^2} - \frac{2n}{\alpha + n}\right)\Bigg(\sum_{d=0}^{M} d\,p_t(d)\Bigg)^2 + \frac{\alpha^2 c_1^2}{(\alpha + n)^2}\Bigg].
\end{align}
The final step is to take the expectation over all possible beliefs $p_t \sim \mathrm{Dirichlet}(\alpha_0, \ldots, \alpha_M)$. Using the facts that
\begin{align}
\mathbb{E}_{T\sim\pi}[p_T(i)] &= \frac{\alpha_i}{\alpha},\\
\mathbb{E}_{T\sim\pi}[p_T(i)^2] &= \mathrm{Var}[p_T(i)] + \mathbb{E}_T[p_T(i)]^2 = \frac{\alpha_i(\alpha - \alpha_i)}{\alpha^2(\alpha + 1)} + \frac{\alpha_i^2}{\alpha^2} = \frac{\alpha_i(\alpha_i + 1)}{\alpha(\alpha + 1)},\\
\mathbb{E}_{T\sim\pi}[p_T(i)\,p_T(j)] &= \mathrm{Cov}[p_T(i), p_T(j)] + \mathbb{E}_T[p_T(i)]\,\mathbb{E}_T[p_T(j)], \qquad i \neq j,\\
&= -\frac{\alpha_i \alpha_j}{\alpha^2(\alpha + 1)} + \frac{\alpha_i \alpha_j}{\alpha^2} = \frac{\alpha_i \alpha_j}{\alpha(\alpha + 1)},
\end{align}
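These Dirichlet moment identities can be checked against simulation. A sketch of ours, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
alphas = np.array([1.5, 2.0, 3.5])        # illustrative Dirichlet parameters
alpha = alphas.sum()
p = rng.dirichlet(alphas, size=500000)    # draws of p_T, one row each

# Closed-form E[p_i], E[p_i^2], and E[p_0 p_1] from the identities above
e1 = alphas / alpha
e2 = alphas * (alphas + 1) / (alpha * (alpha + 1))
e_cross = alphas[0] * alphas[1] / (alpha * (alpha + 1))

print(p.mean(axis=0), e1)
print((p**2).mean(axis=0), e2)
print((p[:, 0] * p[:, 1]).mean(), e_cross)
```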
and
\begin{align}
\mathbb{E}_{T\sim\pi}\Bigg[\Bigg(\sum_{d=0}^{M} d\,p_T(d)\Bigg)^2\Bigg] &= \mathbb{E}_{T\sim\pi}\Bigg[\sum_{d=0}^{M} d^2 p_T^2(d) + 2\!\!\sum_{0\le i<j\le M}\!\! ij\,p_T(i)\,p_T(j)\Bigg]\\
&= \sum_{d=0}^{M} d^2\,\mathbb{E}_{T\sim\pi}\big[p_T^2(d)\big] + 2\!\!\sum_{0\le i<j\le M}\!\! ij\,\mathbb{E}_{T\sim\pi}\big[p_T(i)\,p_T(j)\big]\\
&= \sum_{d=0}^{M} d^2\,\frac{\alpha_d(\alpha_d + 1)}{\alpha(\alpha + 1)} + 2\!\!\sum_{0\le i<j\le M}\!\! ij\,\frac{\alpha_i \alpha_j}{\alpha(\alpha + 1)}\\
&= \frac{1}{\alpha + 1}\sum_{d=0}^{M} \frac{d^2 \alpha_d}{\alpha} + \frac{1}{\alpha(\alpha + 1)}\Bigg[\sum_{d=0}^{M} d^2 \alpha_d^2 + 2\!\!\sum_{0\le i<j\le M}\!\! ij\,\alpha_i \alpha_j\Bigg]\\
&= \frac{c_2}{\alpha + 1} + \frac{1}{\alpha(\alpha + 1)}\Bigg(\sum_{d=0}^{M} d\,\alpha_d\Bigg)^2 = \frac{c_2}{\alpha + 1} + \frac{\alpha c_1^2}{\alpha + 1}, \tag{36}
\end{align}
we obtain
\begin{align}
\mathbb{E}_T\, \mathbb{E}_{X|T}\big[R(T, a^*(\pi_X))\big] &= k\Bigg[\left(1 + \frac{n}{(\alpha + n)^2}\right)\sum_{d=0}^{M} \frac{d^2 \alpha_d}{\alpha} + \left(\frac{2\alpha n c_1}{(\alpha + n)^2} - \frac{2\alpha c_1}{\alpha + n}\right)\sum_{d=0}^{M} \frac{d\,\alpha_d}{\alpha}\\
&\qquad + \left(\frac{n(n-1)}{(\alpha + n)^2} - \frac{2n}{\alpha + n}\right)\left(\frac{c_2}{\alpha + 1} + \frac{\alpha c_1^2}{\alpha + 1}\right) + \frac{\alpha^2 c_1^2}{(\alpha + n)^2}\Bigg]\\
&= k(c_2 - c_1^2)\,\frac{\alpha(1 + \alpha + n)}{(1 + \alpha)(n + \alpha)}.
\end{align}
The value of $n$ samples from the data distribution is therefore
\begin{align}
V_n(\pi) &= k(c_2 - c_1^2) - k(c_2 - c_1^2)\,\frac{\alpha(1 + \alpha + n)}{(1 + \alpha)(n + \alpha)} = \frac{k\,n\,(c_2 - c_1^2)}{(n + \alpha)(1 + \alpha)}. \tag{37}
\end{align}
□
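Expression (37) makes the qualitative behavior of sample information explicit: $V_n$ grows monotonically in $n$, approaches the ceiling $k(c_2 - c_1^2)/(1 + \alpha)$ as $n \to \infty$, and shrinks as the prior concentration $\alpha$ grows. A sketch of ours, assuming illustrative values of $k$, $c_1$, $c_2$, and $\alpha$:

```python
def evsi(n, alpha, k=1.0, c1=2.0, c2=5.0):
    """Closed-form value of n samples, eq. (37)."""
    return k * n * (c2 - c1**2) / ((n + alpha) * (1 + alpha))

alpha = 4.0
values = [evsi(n, alpha) for n in (1, 10, 100, 10**6)]
ceiling = 1.0 * (5.0 - 2.0**2) / (1 + alpha)   # limit as n -> infinity

# V_n increases with n and approaches k(c2 - c1^2)/(1 + alpha)
assert all(a < b for a, b in zip(values, values[1:]))
assert abs(values[-1] - ceiling) < 1e-3
```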
References

[1] C. J. Adcock. An improved Bayesian procedure for calculating sample sizes in multinomial sampling. Journal of the Royal Statistical Society, Series D (The Statistician), 42(2):91–95, 1993.

[2] A. E. Ades, G. Lu, and K. Claxton. Expected value of sample information calculations in medical decision modeling. Medical Decision Making, 24(2):207–227, 2004.

[3] J. Aitchison and S. M. Shen. Logistic-normal distributions: Some properties and uses. Biometrika, 67(2):261–272, 1980.

[4] Debarun Bhattacharjya, Jo Eidsvik, and Tapan Mukerji. The value of information in portfolio problems with dependent projects. Decision Analysis, 10(4):341–351, 2013.

[5] J. Eric Bickel. The relationship between perfect and imperfect information in a two-action risk-sensitive problem. Decision Analysis, 5(3):116–128, 2008.

[6] Jing Cao, J. Jack Lee, and Susan Alber. Comparison of Bayesian sample size criteria: ACC, ALC, and WOC. Journal of Statistical Planning and Inference, 139(12):4111–4122, 2009.

[7] Li Chen and Erica L. Plambeck. Dynamic inventory management with learning about the demand distribution and substitution probability. Manufacturing & Service Operations Management, 10(2):236–256, 2008.

[8] Gary D. Eppen and Ananth V. Iyer. Improved fashion buying with Bayesian updates. Operations Research, 45(6):805–819, 1997.

[9] Marshall Fisher, Kumar Rajaram, and Ananth Raman. Optimizing inventory replenishment of retail fashion products. Manufacturing & Service Operations Management, 3(3):230–241, 2001.

[10] Adam J. Fleischhacker and Pak-Wing Fok. An entropy-based methodology for valuation of demand uncertainty reduction. Decision Sciences, 46(6):1165–1198, 2015.

[11] Yigal Gerchak and David Mossman. On the effect of demand randomness on inventories and costs. Operations Research, 40(4):804–807, 1992.

[12] Gregory A. Godfrey and Warren B. Powell. An adaptive, distribution-free algorithm for the newsvendor problem with censored demands, with applications to inventory and distribution. Management Science, 47:1101–1112, 2001.

[13] Robbie A. Haertel, Kevin D. Seppi, Eric K. Ringger, and James L. Carroll. Return on investment for active learning. In Proceedings of the NIPS Workshop on Cost-Sensitive Learning, volume 72, 2008.

[14] Ronald W. Hilton. The determinants of information value: Synthesizing some general results. Management Science, 27(1):57–64, 1981.

[15] Ronald A. Howard. Decision analysis: Perspectives on inference, decision, and experimentation. Proceedings of the IEEE, 58(5):632–643, 1970.

[16] Jonathan Huang. Maximum likelihood estimation of Dirichlet distribution parameters. CMU Technical Report, 2005. http://jonathan-huang.org/research/dirichlet/dirichlet.pdf, accessed on 2017-09-19.

[17] Woonghee Tim Huh, Retsef Levi, Paat Rusmevichientong, and James B. Orlin. Adaptive data-driven inventory control with censored demand based on Kaplan-Meier estimator. Operations Research, 59(4):929–941, 2011.

[18] Christopher Jackson, Anne Presanis, Stefano Conti, and Daniela De Angelis. Value of information: Sensitivity analysis and research design in Bayesian evidence synthesis. Journal of the American Statistical Association, 2019.

[19] Hawre Jalal and Fernando Alarid-Escudero. A Gaussian approximation approach for value of information analysis. Medical Decision Making, 38(2):174–188, 2018.

[20] Jeffrey Keisler. Comparative static analysis of information value in a canonical decision problem. The Engineering Economist, 49(4):339–349, 2004.

[21] Jeffrey M. Keisler. Additivity of information value in two-act linear loss decisions with normal priors. Risk Analysis: An International Journal, 25(2):351–359, 2005.

[22] Jin Kyung Kwak and Srinagesh Gavirneni. Retailer policy, uncertainty reduction, and supply chain performance. International Journal of Production Economics, 132(2):271–278, 2011.

[23] Retsef Levi, Georgia Perakis, and Joline Uichanco. The data-driven newsvendor problem: New bounds and insights. Operations Research, 63(6):1294–1306, 2015.

[24] Retsef Levi, Robin O. Roundy, and David B. Shmoys. Provably near-optimal sampling-based policies for stochastic inventory control models. Mathematics of Operations Research, 32(4):821–839, 2007.

[25] Chi-Yuan Lin. Numerical techniques for evaluating sample information. Technometrics, 16(3):447–454, 1974.

[26] Dennis V. Lindley. The choice of sample size. Journal of the Royal Statistical Society, Series D (The Statistician), 46(2):129–138, 1997.

[27] Abraham Mehrez. The effect of risk aversion on the expected value of perfect information. Operations Research, 33(2):455–458, 1985.

[28] Avraham Mehrez and Alan Stulman. Some aspects of the distributional properties of the expected value of perfect information (EVPI). Journal of the Operational Research Society, 33(9):827–836, 1982.

[29] Joseph M. Milner and Panos Kouvelis. Order quantity and timing flexibility in supply chains: The role of demand characteristics. Management Science, 51(6):970–985, 2005.

[30] Satoshi Morita, Peter F. Thall, and Peter Müller. Determining the effective sample size of a parametric prior. Biometrics, 64(2):595–602, 2008.

[31] Jeremy Muesing, Nisar Ahmed, Luke Burks, Michael Iuzzolino, and Danielle Albers Szafir. Fully Bayesian human–machine data fusion for robust online dynamic target characterization. Journal of Aerospace Information Systems, 18(2):26–49, 2021.

[32] T. Pham-Gia and N. Turkkan. Sample size determination in Bayesian analysis. Journal of the Royal Statistical Society, Series D (The Statistician), 41(4):389–397, 1992.

[33] Howard Raiffa and Stephen E. Fienberg. The early statistical years: 1947–1967. A conversation with Howard Raiffa. Statistical Science, pages 136–149, 2008.

[34] Howard Raiffa and Robert Schlaifer. Applied Statistical Decision Theory. MIT Press, Cambridge, Massachusetts, 1961.

[35] Soroush Saghafian and Brian Tomlin. The newsvendor under demand ambiguity: Combining data with moment and tail information. Operations Research, 64(1):167–185, 2016.

[36] Mark Strong, Jeremy E. Oakley, Alan Brennan, and Penny Breeze. Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: A fast, nonparametric regression-based method. Medical Decision Making, 35(5):570–583, 2015.

[37] Zhiyuan Wang, Zhiqiang Zheng, Wei Jiang, and Shaojie Tang. Blockchain-enabled data sharing in supply chains: Model, operationalization, and tutorial. Production and Operations Management, 30(7):1965–1985, 2021.

[38] Wei Xiang and Wenxing Zhou. Optimal sample size determination based on Bayesian reliability and value of information. In 13th International Conference on Applications of Statistics and Probability in Civil Engineering, 2019.

[39] Phillip M. Yelland. Bayesian forecasting for low-count time series using state-space models: An empirical evaluation for inventory management. International Journal of Production Economics, 118(1):95–103, 2009.

[40] Jinfeng Yue, Bintong Chen, and Min-Chiang Wang. Expected value of distribution information for the newsvendor problem. Operations Research, 54(6):1128–1136, 2006.
0NAyT4oBgHgl3EQf0_na/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
19E3T4oBgHgl3EQfngq-/content/tmp_files/2301.04626v1.pdf.txt ADDED
@@ -0,0 +1,1251 @@
Deep Axial Hypercomplex Networks
Nazmul Shahadat, Anthony S. Maida
University of Louisiana at Lafayette
Lafayette LA 70504, USA
nazmul.ruet@gmail.com, maida@louisiana.edu

Abstract

Over the past decade, deep hypercomplex-inspired networks have enhanced feature extraction for image classification by enabling weight sharing across input channels. Recent works improve representational capability with hypercomplex-inspired networks, but at high computational cost. This paper reduces this cost by factorizing a quaternion 2D convolutional module into two consecutive vectormap 1D convolutional modules. We also use 5D parameterized hypercomplex multiplication based fully connected layers. Incorporating both yields our proposed hypercomplex network, a novel architecture that can be assembled to construct deep axial-hypercomplex networks (DANs) for image classification. We conduct experiments on the CIFAR benchmarks, SVHN, and Tiny ImageNet datasets and achieve better performance with fewer trainable parameters and FLOPS. Our proposed model achieves almost 2% higher performance on the CIFAR and SVHN datasets, and more than 3% on the Tiny ImageNet dataset, while using six times fewer parameters than real-valued ResNets. It also shows state-of-the-art performance on the CIFAR benchmarks in hypercomplex space.
1. Introduction

Convolutional neural networks (CNNs) and hypercomplex CNNs (HCNNs) for image classification form a hierarchical design in which different layers extract different levels of feature representation. CNNs have shown significant success in recent decades [2, 9]. In vision tasks, these CNN-based feature extraction designs can be improved with regard to multi-dimensional data. To enhance the CNNs' ability, HCNNs have been used, which treat multi-dimensional data as a cohesive entity by applying cross-channel weight sharing to discover cross-channel relationships [4, 5, 14, 15]. Implementations in hypercomplex space provide further advantages [1, 3, 7, 13], and it has been shown that HCNNs can create better output representations [13, 16, 17].

Recently, HCNNs of various dimensions, such as 2D HCNNs [21], 4D HCNNs [4, 14, 15], 8D HCNNs [20], and generalized HCNNs [5], have been studied; all have hypercomplex properties. The reason behind the success of HCNNs is that they capture cross-channel relationships [4, 5, 14, 15, 17]. Among them, quaternion networks come with a full set of algebraic operations and have outperformed the other HCNNs. Stacking coherent quaternion convolutional layers has produced better representational feature maps and shown promising results in vision tasks [4, 15, 17]. These networks are cost-effective compared to real-valued CNNs and fully connected networks, but they remain expensive for large inputs such as vision tasks.

This work uses an axial hypercomplex network that: 1) handles multidimensional inputs; 2) applies weight sharing across input channels; 3) captures cross-channel correlations; 4) reduces computational costs; and 5) increases validation accuracy on image classification datasets. The main idea of this work is to decompose a hypercomplex 2D convolutional operation into two consecutive vectormap 1D convolutional operations. By splitting the 2D spatial convolution into height-axis and width-axis spatial convolutions, the model reduces cost once again. Additionally, we apply a quaternion-based stem layer and a parameterized hypercomplex multiplication (PHM) based fully connected layer to obtain better representations and better generalization performance.

This paper conducts extensive experiments that show the effectiveness of our novel axial hypercomplex networks on four image classification datasets. Our novel contribution is a new model that factorizes the two-dimensional spatial hypercomplex convolutional operation into two one-dimensional operations along the height axis and the width axis sequentially. Our contributions are:

• Replacing the spatial 3 × 3 QCNN in the bottleneck block of quaternion ResNets with two VCNNs and showing the effectiveness of the proposed networks.
• Applying a QCNN in the stem layer (the first layer of the network), resulting in a quaternion-stem model.
• Like QPHM [16], applying a PHM-based dense layer in the backend of the network.

The proposed axial hypercomplex ResNets outperformed the baseline networks on classification datasets, as shown in Tables 2, 3, and 4. Our experiments show that the proposed model achieves state-of-the-art results with far fewer trainable parameters and FLOPS for the CIFAR benchmarks in hypercomplex space.

arXiv:2301.04626v1 [cs.CV] 11 Jan 2023

Figure 1. Proposed axial-hypercomplex network with PHM-based fully connected layer in the backend. "AHNN" stands for the axial-hypercomplex neural network bottleneck block, which is described in Figure 2. Here, Qin = Qr + Qw + Qx + Qy + Qz, H = Hr + Hw + Hx + Hy + Hz, and Qout = Qro + Qwo + Qxo + Qyo + Qzo are the input, hypercomplex parameterized weight, and output, respectively. For the calculation of H see the "PHM Layer" section.
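The core cost argument, replacing one dense k × k 2D convolution with a k × 1 convolution followed by a 1 × k convolution, can be illustrated with simple parameter counts. The sketch below is ours, not from the paper, and deliberately ignores the hypercomplex weight sharing, which reduces parameters further by roughly the hypercomplex dimension:

```python
def conv2d_params(c_in, c_out, k):
    """Parameters of a dense k x k 2D convolution (no bias)."""
    return c_in * c_out * k * k

def axial_params(c_in, c_out, k):
    """Two 1D convolutions: k x 1 along height, then 1 x k along width."""
    return c_in * c_out * k + c_out * c_out * k

# Illustrative bottleneck-like sizes: the axial pair costs 2k instead of k^2
c_in = c_out = 256
k = 3
print(conv2d_params(c_in, c_out, k))   # 589824
print(axial_params(c_in, c_out, k))    # 393216
```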
| 108 |
+
2. Background and Related Work
|
| 109 |
+
2.1. Quaternion Convolution
|
| 110 |
+
The deep quaternion CNN extends of complex CNNs
|
| 111 |
+
[18]. This section explains cross channel weight sharing.
|
| 112 |
+
[4] and [15] extended the principles of quaternion con-
|
| 113 |
+
volution operations, and weight initialization. Quater-
|
| 114 |
+
nion number system is formed as, Q = r + ix + jy +
|
| 115 |
+
kz ; r, x, y, z ∈ R where, r, x, y, and z are real values
|
| 116 |
+
and i, j, and k are imaginary. Quaternion convolution
|
| 117 |
+
between quaternion filter matrix F and quaternion input
|
| 118 |
+
vector M, is defined as [4]:
|
| 119 |
+
M ⊛ F = (Or, Oi, Oj, Ok)
|
| 120 |
+
= (Mr ∗ Fr − Mi ∗ Fi − Mj ∗ Fj − Mk ∗ Fk,
|
| 121 |
+
Mi ∗ Fr + Mr ∗ Fi + Mj ∗ Fk − Mk ∗ Fj,
|
| 122 |
+
Mj ∗ Fr + Mr ∗ Fj + Mk ∗ Fi − Mi ∗ Fk,
|
| 123 |
+
Mk ∗ Fr + Mr ∗ Fk + Mi ∗ Fj − Mj ∗ Fi)
|
| 124 |
+
(1)
|
| 125 |
+
where, M ⊛ F, and all others are quaternion numbers.
|
| 126 |
+
Or is the real part, and Oi, Oj, and Ok are the imag-
|
| 127 |
+
inary parts. Although there are 16 real-valued convo-
|
| 128 |
+
lutions in Equation 1, there are only four kernels that
|
| 129 |
+
are reused. The weight sharing happens this way [14]
|
| 130 |
+
which forces the model to learn cross-channel interre-
|
| 131 |
+
lationships. According to the quaternion definition, a
|
| 132 |
+
quaternion layer can accept four or m numbers of input
|
| 133 |
+
channels, where m is divisible by four. To process m
|
| 134 |
+
input channels (m ≥ 4), m/4 number of independent
|
| 135 |
+
quaternion convolution modules is required. Also, there
|
| 136 |
+
are m/4 weight sets where each module has its own
|
| 137 |
+
weight sets. Cross-channel weight sharing allows dis-
|
| 138 |
+
covering of cross-channel input correlations. Our weight
|
| 139 |
+
initialization was the same as [4].
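The 16 real-valued convolutions reusing four kernels can be made concrete with a small sketch. The following is a minimal NumPy illustration of Equation 1, not the authors' implementation; conv2d is a naive single-channel valid-mode helper written here only for self-containment.

```python
import numpy as np

def conv2d(x, k):
    """Naive single-channel, valid-mode 2D cross-correlation."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def quaternion_conv2d(M, F):
    """Equation 1: sixteen real convolutions reusing the four kernels of F."""
    Or = conv2d(M["r"], F["r"]) - conv2d(M["i"], F["i"]) - conv2d(M["j"], F["j"]) - conv2d(M["k"], F["k"])
    Oi = conv2d(M["i"], F["r"]) + conv2d(M["r"], F["i"]) + conv2d(M["j"], F["k"]) - conv2d(M["k"], F["j"])
    Oj = conv2d(M["j"], F["r"]) + conv2d(M["r"], F["j"]) + conv2d(M["k"], F["i"]) - conv2d(M["i"], F["k"])
    Ok = conv2d(M["k"], F["r"]) + conv2d(M["r"], F["k"]) + conv2d(M["i"], F["j"]) - conv2d(M["j"], F["i"])
    return Or, Oi, Oj, Ok

# Sanity check with 1x1 "images": the identity quaternion (1, 0, 0, 0)
# reproduces the filter components unchanged.
M = {c: np.array([[v]]) for c, v in zip("rijk", [1.0, 0.0, 0.0, 0.0])}
F = {c: np.array([[v]]) for c, v in zip("rijk", [2.0, 3.0, 4.0, 5.0])}
print(quaternion_conv2d(M, F))
```

Note how each of the four kernels of F appears in all four output components: this reuse is what forces cross-channel interaction.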
2.2. Vectormap Convolution

We explain 3D generalized hypercomplex networks, or VCNNs, because VCNNs are used in our proposed models. The VCNN is more flexible, as it does not require 4D input.

Figure 2. AHNN bottleneck block used in our proposed axial-hypercomplex networks. “bn”, “quat”, and “VCNN” stand for batch normalization, quaternion CNN, and vectormap CNN, respectively.
However, it still uses cross-channel weight sharing: as seen in the 3 × 3 matrix of Equation 2, only three filters A, B, and C are used. The vectormap convolution operation is defined as:

[R(M ∗ F)]           [A  B  C]   [x]
[I(M ∗ F)]  =  L ⊙ ( [C  A  B] ∗ [y] )     (2)
[J(M ∗ F)]           [B  C  A]   [z]

where A, B, and C are real-valued kernels, x, y, and z are real-valued vectors, and L is a learnable matrix, L ∈ R^(D3×D3), where D3 stands for the 3-dimensional input channels. The initial value of the matrix L is defined as:

      [ 1  1  1]
L  =  [-1  1  1]     (3)
      [-1  1  1]

Our weight initialization follows [5].
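As an assumption-level illustration (not the authors' code), the circulant weight sharing of Equations 2 and 3 can be sketched at a single spatial position with scalar stand-ins for the kernels, so each convolution collapses to a multiplication. One plausible reading of the ⊙ is used here: L acts as a Hadamard sign mask on the circulant kernel matrix before the product with the input channels.

```python
import numpy as np

# Scalar stand-ins for the three real-valued kernels and the three channels.
A, B, C = 2.0, 3.0, 5.0
x, y, z = 1.0, 0.0, 0.0

K = np.array([[A, B, C],
              [C, A, B],
              [B, C, A]])          # three kernels reused across nine slots

L = np.array([[ 1.0, 1.0, 1.0],
              [-1.0, 1.0, 1.0],
              [-1.0, 1.0, 1.0]])   # Equation 3 initialization

out = (L * K) @ np.array([x, y, z])  # sign mask, then the circulant product
print(out)  # [ 2. -5. -3.]
```

With only three distinct kernels filling nine slots, the parameter count per block is one quarter lower than the quaternion case, at the price of mixing three channels instead of four.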
2.3. PHM Layer

Parameterized hypercomplex multiplication (PHM) is another form of generalized hypercomplex network, explained in [22]. As we use the PHM layer only in the fully connected (FC) layer, our explanation is restricted to the PHM-based dense layer. It is defined as y = Hx + b, where H ∈ R^(k×d) represents the PHM layer and is calculated as H = Σ_{i=1}^{n} Ii ⊗ Ai, where Ii ∈ R^(n×n) and Ai ∈ R^(k/n × d/n) are learnable parameter matrices and i = 1 . . . n (n = 4 or 5). These matrices can be reused, which leads to parameter reduction. The symbol ⊗ represents the Kronecker product. The flattened output of the CNN network is used as the input to the PHM FC layer. For the 5D hypercomplex case, these inputs are split as Qin = Qr + Qw + Qx + Qy + Qz and the outputs are merged into Qout as Qout = Qro + Qwo + Qxo + Qyo + Qzo. The 4D hypercomplex parameter matrix, which expresses the Hamilton product, is discussed in [22], and the 5D hypercomplex parameter matrix of the PHM operation is explained in [16]. The 5D parameter matrix is used to construct a 5D PHM FC layer, which preserves all properties of the PHM layer and of hypercomplex networks. This work uses the 5D PHM layer.
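To make the construction of H and the parameter saving tangible, here is a small NumPy sketch of the sum-of-Kronecker-products definition above. It is an illustration, not the authors' code: the random factors stand in for learned ones, and the toy sizes are chosen only so the shapes divide evenly.

```python
import numpy as np

rng = np.random.default_rng(0)

def phm_weight(n, k, d):
    """H = sum_i kron(I_i, A_i), with I_i in R^{n x n}, A_i in R^{k/n x d/n}.
    In an actual PHM layer both factor sets are learnable; random here."""
    I = rng.standard_normal((n, n, n))
    A = rng.standard_normal((n, k // n, d // n))
    return sum(np.kron(I[i], A[i]) for i in range(n))

n, k, d = 5, 10, 20
H = phm_weight(n, k, d)
print(H.shape)  # (10, 20), same shape as a full dense weight

# Parameter count: full dense layer versus the PHM factorization.
full_params = k * d
phm_params = n * (n * n + (k // n) * (d // n))
print(full_params, phm_params)  # 200 165
```

At realistic layer widths the gap grows toward the roughly 1/n parameter count that motivates using the PHM layer in the backend.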
3. Proposed Axial Hypercomplex Networks

Complex convolutional neural networks (CCNNs), QCNNs, octonion convolutional neural networks (OCNNs), VCNNs, and PHM are versions of HCNNs that provide all the advantages of HCNNs, such as weight sharing across input channels and the ability to discover cross-channel correlations. These HCNNs perform better with fewer trainable parameters for vision applications, but they are still computationally expensive. For vision tasks, these HCNNs take O(N^2) resources for an image of length N, where N is the size of the flattened pixel set. For a 2D image of height h and width w, where N = hw and h = w, the computational cost is O((hw)^2) = O(h^2 w^2) = O(h^4).

This section describes our proposed axial-hypercomplex model, shown in Figures 1 and 2, which reduces this computational cost. Axial networks were first used in [8, 19]. To implement our proposed model, we followed the assumption that images are approximately square, i.e., the pixel counts of h and w are the same and both are much smaller than the pixel count of hw [19]. To translate a quaternion convolutional bottleneck block into an axial-hypercomplex bottleneck block, we replace the 3 × 3 spatial quaternion convolutional operation with two axial vectormap convolutional neural network (VCNN) layers. These layers are applied sequentially along the height axis (a 3-channel 3 × 1 VCNN layer) and the width axis (a 3-channel 1 × 3 VCNN layer). The two 1 × 1 quaternion convolutional layers remain unchanged, as in the original QCNNs [4]; they are responsible for reducing and then increasing the number of channels. This forms our proposed axial-hypercomplex bottleneck block, seen in Figure 2, which is stacked multiple times to construct the axial-hypercomplex ResNets.

Table 1 columns: Layer | Output size | Deep Quaternion ResNet | Vectormap ResNet | QPHM | Axial Hypercomplex

Stem (32x32):
  3x3Q, 120, std=1 | 3x3V, 120, std=1 | 3x3Q, 120, std=1 | 3x3Q, 120, std=1
Bottleneck group 1 (32x32):
  [1x1Q, 120; 3x3Q, 120; 1x1Q, 480]×3 | [1x1V, 120; 3x3V, 120; 1x1V, 480]×3 | [1x1QP, 120; 3x3QP, 120; 1x1QP, 480]×3 | [1x1Q, 120; 3x1AV, 120; 1x3AV, 120; 1x1Q, 480]×3
Bottleneck group 2 (16x16):
  [1x1Q, 240; 3x3Q, 240; 1x1Q, 960]×4 | [1x1V, 240; 3x3V, 240; 1x1V, 960]×4 | [1x1QP, 240; 3x3QP, 240; 1x1QP, 960]×4 | [1x1Q, 240; 3x1AV, 240; 1x3AV, 240; 1x1Q, 960]×4
Bottleneck group 3 (8x8):
  [1x1Q, 480; 3x3Q, 480; 1x1Q, 1920]×6 | [1x1V, 480; 3x3V, 480; 1x1V, 1920]×6 | [1x1QP, 480; 3x3QP, 480; 1x1QP, 1920]×6 | [1x1Q, 480; 3x1AV, 480; 1x3AV, 480; 1x1Q, 1920]×6
Bottleneck group 4 (4x4):
  [1x1Q, 960; 3x3Q, 960; 1x1Q, 3840]×3 | [1x1V, 960; 3x3V, 960; 1x1V, 3840]×3 | [1x1QP, 960; 3x3QP, 960; 1x1QP, 3840]×3 | [1x1Q, 960; 3x1AV, 960; 1x3AV, 960; 1x1Q, 3840]×3
Pooling layer (1x1x100):
  global average-pool, 100 outputs
Output (1x1x100):
  fully connected layer, softmax; the PHM-based models use a 5D PHM layer

Table 1. The 50-layer architectures tested on CIFAR-100: quaternion ResNet [4, 5], vectormap ResNet [5], QPHM [16], and our proposed axial-hypercomplex networks. The input is a 32x32x3 color image for the CIFAR benchmarks. The number of stacked bottleneck modules is specified by the multipliers. “Q”, “V”, “QP”, “AV”, and “std” denote quaternion convolution, 3D vectormap convolution, QPHM (quaternion networks with a 4D PHM layer), axial vectormap convolution, and stride, respectively. Integers (e.g., 120, 240) denote the number of output channels. PHM stands for parameterized hypercomplex multiplication. This work uses a 5D PHM-based FC layer.
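The channel grouping implied by the block above can be checked with a few lines of bookkeeping over the group-1 column of Table 1. The stage list is transcribed from the table; the rule that quaternion layers split channels into groups of four and vectormap layers into groups of three follows the text.

```python
# One axial-hypercomplex bottleneck (group-1 column of Table 1):
# 1x1 quaternion down-projection, 3x1 then 1x3 axial vectormap,
# 1x1 quaternion up-projection.
stages = [("1x1Q", 120), ("3x1AV", 120), ("1x3AV", 120), ("1x1Q", 480)]
for name, ch in stages:
    group = 3 if "AV" in name else 4
    assert ch % group == 0  # 120 and 480 are divisible by both 3 and 4
    print(f"{name}: {ch} channels -> {ch // group} modules of {group} channels")
```

This divisibility requirement is why the channel widths in Table 1 are multiples of 120.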
Axial-hypercomplex models only work on one dimension at a time, but the input images are two-dimensional. For two-dimensional vision tasks, a square 2D input with h = w, so that w^2 = N, where N is the sequence length of the flattened pixel set, is split into two 1D vectors. The 3-channel VCNN operation is first applied along the 1D input image region of length h and then along the 1D input image region of length w. These two 1D operations, finally merged together, reduce the cost to O(h · h^2) = O(h^3) from the HCNN cost of O(h^4).

Each quaternion convolution accepts four channels of input and produces four channels of output. Hence, the required number of 1 × 1 quaternion conv2d modules equals the number of input channels divided by four. The set of output channels of the down-sampling 1 × 1 quaternion layer is merged into the input to the axial VCNN modules, and the output channels of the axial VCNN modules are split into groups of four again for the 1 × 1 up-sampling quaternion conv2d layer [4, 17]. One quaternion 2D convolution is applied to each group of four channels and one vectormap 2D convolution is applied to each group of three channels. Like vectormap convolution, each axial vectormap module takes three input channels. Thus, the weight sharing is compartmentalized into groups of four input channels and then into groups of three input channels.

For better representation, a quaternion convolution layer is also used in the stem layer (the first layer of the network) as a quaternion-based frontend layer, and the fully-connected dense layer serves as a PHM-based backend layer of the deep axial-hypercomplex networks (DANs). Figure 1 illustrates our proposed axial-hypercomplex network architecture.
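A back-of-the-envelope count makes the saving from the axial split concrete. This toy sketch (an illustration under simplifying assumptions: single channel, unit stride, padding ignored) compares the multiply-add count of one k × k spatial kernel against a k × 1 pass followed by a 1 × k pass over an h × w feature map.

```python
# Multiply-adds over an h x w map: full k x k kernel versus the axial pair.
def spatial_cost(k: int, h: int, w: int) -> int:
    return k * k * h * w          # k*k products at each output position

def axial_cost(k: int, h: int, w: int) -> int:
    return k * h * w + k * h * w  # k products per position, twice

k, h, w = 3, 32, 32
print(spatial_cost(k, h, w))  # 9216
print(axial_cost(k, h, w))    # 6144
```

For k = 3 the per-position work drops from 9 to 6 multiply-adds, and the gap widens linearly in k.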
4. Experiment

We conduct extensive experiments on four classification datasets to analyze the effectiveness of our proposed axial-hypercomplex model. As QCNNs, VCNNs, residual networks (ResNets), QPHM [16], and VPHM [16] all perform 2D spatial convolution operations, we compare the performance of our proposed axial-hypercomplex networks with these baseline models. Among them, all models except the ResNets perform Hamilton products, like our proposed model.

Model Name              Layers  Dataset   Params  FLOPS  Latency  Validation Accuracy

ResNet [6]                      CIFAR10   40.9M   2.56G  0.86ms   94.68
ResNet-with-QPHM [16]           CIFAR10   40.8M   2.55G  0.64ms   95.32
Quaternion [4]                  CIFAR10   10.2M   1.11G  0.65ms   94.89
Vectormap [5]           26      CIFAR10   13.6M   1.09G  0.65ms   94.76
QPHM [16]                       CIFAR10   10.2M   1.10G  0.64ms   95.26
VPHM [16]                       CIFAR10   13.6M   1.08G  0.67ms   95.15
Axial-Hypercomplex              CIFAR10    6.2M   1.06G  0.68ms   95.91-95.85

ResNet [6]                      CIFAR10   57.8M   3.31G  1.08ms   94.95
ResNet-with-QPHM [16]           CIFAR10   57.7M   3.31G  0.81ms   95.80
Quaternion [4]                  CIFAR10   14.5M   1.47G  0.82ms   95.33
Vectormap [5]           35      CIFAR10   19.3M   1.45G  0.84ms   95.06
QPHM [16]                       CIFAR10   14.5M   1.46G  0.79ms   95.55
VPHM [16]                       CIFAR10   19.3M   1.44G  0.82ms   95.60
Axial-Hypercomplex              CIFAR10    9.2M   1.36G  0.84ms   96.49-96.45

ResNet [6]                      CIFAR10   82.5M   4.57G  1.32ms   94.08
ResNet-with-QPHM [16]           CIFAR10   82.5M   4.57G  0.81ms   95.86
Quaternion [4]                  CIFAR10   21.09M  1.93G  1.06ms   95.42
Vectormap [5]           50      CIFAR10   27.6M   1.93G  1.13ms   95.37
QPHM [16]                       CIFAR10   20.7M   1.92G  1.06ms   95.75
VPHM [16]                       CIFAR10   27.5M   1.92G  1.08ms   95.76
Axial-Hypercomplex              CIFAR10   13.6M   1.75G  1.09ms   96.79-96.71

ResNet [6]                      CIFAR100  41.2M   2.56G  0.89ms   78.21
ResNet-with-QPHM [16]           CIFAR100  40.9M   2.56G  0.64ms   79.14
Quaternion [4]                  CIFAR100  10.6M   1.15G  0.64ms   77.65
Vectormap [5]           26      CIFAR100  13.6M   1.15G  0.64ms   77.65
QPHM [16]                       CIFAR100  10.3M   1.11G  0.65ms   78.15
VPHM [16]                       CIFAR100  13.7M   1.09G  0.66ms   78.14
Axial-Hypercomplex              CIFAR100   6.2M   1.06G  0.69ms   79.42-79.24

ResNet [6]                      CIFAR100  58.1M   3.31G  1.07ms   78.72
ResNet-with-QPHM [16]           CIFAR100  57.8M   3.31G  0.81ms   79.65
Quaternion [4]                  CIFAR100  14.5M   1.51G  0.81ms   78.96
Vectormap [5]           35      CIFAR100  19.3M   1.48G  0.84ms   79.52
QPHM [16]                       CIFAR100  14.5M   1.47G  0.82ms   78.46
VPHM [16]                       CIFAR100  19.6M   1.45G  0.82ms   79.86
Axial-Hypercomplex              CIFAR100   9.2M   1.36G  0.85ms   79.93-79.63

ResNet [6]                      CIFAR100  82.9M   4.57G  1.36ms   78.95
ResNet-with-QPHM [16]           CIFAR100  82.6M   4.57G  1.09ms   79.89
Quaternion [4]                  CIFAR100  21.09M  1.96G  1.06ms   79.17
Vectormap [5]           50      CIFAR100  27.6M   1.93G  1.13ms   79.39
QPHM [16]                       CIFAR100  20.7M   1.93G  1.05ms   78.22
VPHM [16]                       CIFAR100  27.5M   1.92G  1.08ms   79.49
Axial-Hypercomplex              CIFAR100  13.6M   1.75G  1.09ms   80.81-80.75

Table 2. Image classification performance on the CIFAR benchmarks for the 26-, 35-, and 50-layer architectures. Here, QPHM and VPHM denote the quaternion networks with the PHM FC layer and the vectormap networks with the PHM FC layer, respectively.
4.1. Method

We conducted our experiments using a five-dimensional PHM dense layer in the backend of the network, a quaternion network at the beginning of the network, and the axial-hypercomplex residual bottleneck block on the CIFAR benchmark datasets [10], Street View House Numbers (SVHN) [12], and Tiny ImageNet [11].

Model Name              Layers  Params  FLOPS  Latency  Validation Accuracy

ResNet [6]                      40.9M   2.56G  0.82ms   96.04
ResNet-with-QPHM [16]           40.8M   2.56G  0.62ms   96.64
Quaternion [4]                  10.2M   1.11G  0.66ms   95.88
Vectormap [5]           26      13.6M   1.10G  0.66ms   95.93
QPHM [16]                       10.2M   1.10G  0.62ms   95.97
VPHM [16]                       13.6M   1.08G  0.64ms   96.24
Axial-Hypercomplex               6.2M   1.06G  0.69ms   97.21-97.05

ResNet [6]                      57.8M   3.31G  0.98ms   95.74
ResNet-with-QPHM [16]           57.7M   3.31G  0.79ms   96.22
Quaternion [4]                  14.5M   1.47G  0.84ms   95.95
Vectormap [5]           35      19.5M   1.45G  0.84ms   95.97
QPHM [16]                       14.5M   1.45G  0.82ms   95.99
VPHM [16]                       19.3M   1.44G  0.82ms   96.34
Axial-Hypercomplex               9.2M   1.36G  0.85ms   97.25-96.90

ResNet [6]                      82.5M   4.57G  1.19ms   95.76
ResNet-with-QPHM [16]           82.5M   4.57G  1.04ms   96.78
Quaternion [4]                  20.7M   1.94G  1.04ms   96.24
Vectormap [5]           50      27.6M   1.93G  1.11ms   96.39
QPHM [16]                       20.7M   1.93G  1.04ms   96.46
VPHM [16]                       27.5M   1.92G  1.09ms   96.49
Axial-Hypercomplex              13.6M   1.75G  1.11ms   97.47-97.25

Table 3. Image classification performance on the SVHN benchmark for the 26-, 35-, and 50-layer architectures. Here, QPHM and VPHM denote the quaternion networks with the PHM FC layer and the vectormap networks with the PHM FC layer, respectively.

Model Name              Layers  Params  FLOPS  Latency  Validation Accuracy

ResNet [6]                      41.6M   10.2G  3.06ms   57.21
ResNet-with-QPHM [16]           41M     2.56G  2.31ms   57.84
Quaternion [4]                  11.02M  4.54G  2.48ms   53.84
Vectormap [5]           26      14.4M   4.56G  2.88ms   56.15
QPHM [16]                       10.4M   1.11G  2.31ms   54.02
VPHM [16]                       13.8M   4.44G  3.27ms   53.11
Axial-Hypercomplex               6.3M   1.06G  2.49ms   58.56-58.06

ResNet [6]                      58.5M   13.2G  3.21ms   57.80
ResNet-with-QPHM [16]           57.9M   3.31G  2.85ms   59
Quaternion [4]                  15.2M   5.98G  3.52ms   54.53
Vectormap [5]           35      20.07M  5.98G  3.76ms   55.99
QPHM [16]                       14.6M   1.47G  2.88ms   56.42
VPHM [16]                       19.4M   5.88G  4.08ms   56.10
Axial-Hypercomplex               9.3M   1.36G  2.97ms   60.06-59.87

ResNet [6]                      83.2M   18.2G  3.77ms   59.06
ResNet-with-QPHM [16]           82.6M   4.57G  3.66ms   60.30
Quaternion [4]                  21.4M   7.87G  4.14ms   56.63
Vectormap [5]           50      28.3M   7.87G  4.34ms   57.52
QPHM [16]                       20.8M   1.93G  3.88ms   59.42
VPHM [16]                       27.7M   7.75G  4.51ms   58.96
Axial-Hypercomplex              13.7M   1.75G  3.93ms   62.73-62.07

Table 4. Image classification performance on the Tiny ImageNet benchmark for the 26-, 35-, and 50-layer architectures. Here, QPHM and VPHM denote the quaternion networks with the PHM FC layer and the vectormap networks with the PHM FC layer, respectively.
The models we tested to compare with our proposed model are: the standard DCNNs [6], the DQNNs [4], the axial-ResNet with QPHM [16], QPHM [16], VPHM [16], and our proposed method. The CIFAR-10 and CIFAR-100 datasets each consist of 60,000 color images of size 32 × 32 pixels. These datasets fall into 10 and 100 distinct classes, respectively, and are split into a training set of 50,000 images and a test set of 10,000 images. We perform standard data augmentation schemes for these datasets, as in [4-6, 16]. Both datasets were normalized using per-channel mean and standard deviation. We perform horizontal flips and take random crops from images padded by 4 pixels on each side: padding yields a 40 × 40 pixel image, from which a 32 × 32 crop is randomly extracted.

SVHN contains about 600,000 digit images [12]. For experiments on SVHN we do not perform any image pre-processing except simple mean/std normalization. We use similar augmentation for the Tiny ImageNet dataset, which contains 100,000 training images of 200 classes (500 per class), downsized to 64 × 64 color images. The test set contains 10,000 images [11].
All baseline models (the real-valued networks, the original quaternion network, the original vectormap network, the QPHM, and the VPHM networks) were trained using the same components and the same datasets. All models in Table 2 were trained using the same hyperparameters and the same number of output channels. The 50-layer architectural details of the above-mentioned models are depicted in Table 1 for the CIFAR-100 dataset. Due to space limitations, the deep ResNet and VPHM network architectures are not depicted in Table 1.

In the stem layer, a 3 × 3 convolution is used for the deep ResNets [6]; a 3 × 3 quaternion convolution is used for the deep quaternion ResNets [4, 18], for QPHM [16], and for the axial-hypercomplex networks (our proposed method); and a 3 × 3 vectormap convolution is used for the deep vectormap ResNets [5] and the VPHM [16] networks, with stride 1 and 120 output filters. We use parameterized hypercomplex multiplication (PHM) for the dense layer in the backend of the deep ResNets, QPHM, VPHM, and our proposed axial-hypercomplex networks. In the bottleneck blocks, the numbers of output channels of the bottleneck groups are 120, 240, 480, and 960 for all networks. In this experiment, we analyze 26-layer, 35-layer, and 50-layer architectures with the bottleneck block multipliers “[1, 2, 4, 1]”, “[2, 3, 4, 2]”, and “[3, 4, 6, 3]”. These are depicted in Table 1.
We ran all of the models using the stochastic gradient descent optimizer. The learning rate was warmed up linearly from zero to 0.1 over the first 10 epochs, followed by cosine learning rate scheduling from epoch 11 to 150. All models were trained with a batch size of 128.
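The schedule just described can be sketched in a few lines. This is an illustrative reconstruction, not the training script: the decay floor of the cosine phase is not stated in the text, so zero is assumed here.

```python
import math

def lr_at(epoch: int, base_lr: float = 0.1, warmup: int = 10, total: int = 150) -> float:
    """Linear warmup from 0 to base_lr over `warmup` epochs, then cosine
    decay toward 0 from epoch warmup+1 through `total`."""
    if epoch <= warmup:
        return base_lr * epoch / warmup
    progress = (epoch - warmup) / (total - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

print(lr_at(5))              # 0.05, mid-warmup
print(lr_at(10))             # 0.1, peak
print(round(lr_at(150), 6))  # 0.0, end of decay
```

Equivalent behavior is available out of the box in most frameworks, e.g. a warmup wrapper around a cosine-annealing scheduler.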
4.2. Results

The overall results for all models (base models and our proposed networks) appear in Tables 2, 3, and 4. The top half of Table 2 shows the results for the CIFAR-10 dataset and the bottom half presents the results for CIFAR-100. Both datasets were tested with the 26-, 35-, and 50-layer architectures. Reported are the parameter count, the FLOPS count (number of multiply-add operations), the inference time or latency (time required to process a single image), and the percentage validation accuracy of each model. We evaluate the original ResNets [6], ResNet with QPHM [16], the original quaternion networks [4], the original vectormap networks [5], QPHM [16], and VPHM [16] with the same configuration as our proposed axial-hypercomplex networks. Our proposed axial-hypercomplex networks achieve better validation accuracy with lower parameter counts and FLOPS on the CIFAR-10 and CIFAR-100 datasets than the baseline networks. More precisely, our proposed method uses almost six times fewer parameters than the ResNets, one-third fewer than the quaternion networks, half as many as the vectormap networks, one-third fewer than QPHM, and half as many as VPHM. Moreover, the axial-hypercomplex networks achieved state-of-the-art results on these CIFAR benchmarks in hypercomplex space.
The performance on the SVHN and Tiny ImageNet datasets is shown in Tables 3 and 4 for all architectures. As on the CIFAR datasets, the axial-hypercomplex network's validation accuracies outperform the other base networks with fewer trainable parameters and FLOPS. Tables 2, 3, and 4 report our proposed model's performance as ranges over three runs. However, the latency of the axial-hypercomplex networks is slightly higher in some cases than that of the quaternion-based networks. This may be due to the use of vectormap networks along with quaternion networks, as the latency of vectormap networks is higher.
5. Discussion and Conclusions

This paper proposes axial-hypercomplex convolutions to reduce the cost of 2D convolutional operations and shows their effectiveness on image classification tasks. We also applied a five-dimensional PHM layer in the network's backend. On the CIFAR benchmarks, our proposed axial-hypercomplex network, formed by stacking axial-vectormap convolutions (three-dimensional) in the quaternion bottleneck blocks, achieved state-of-the-art results among hypercomplex networks.
|
| 1109 |
+
Our main conclusion is that using quaternion convo-
|
| 1110 |
+
lutions as the frontend stem layer, four/five-dimensional
|
| 1111 |
+
PHM-based densely connected backend layer, and axial-
|
| 1112 |
+
hypercomplex bottleneck block improves classification
|
| 1113 |
+
|
| 1114 |
+
performance on the CIFAR benchmarks, SVHN, and
|
| 1115 |
+
Tiny ImageNet datasets in comparison to the other
|
| 1116 |
+
models we tested. Our proposed method factorizes a
|
| 1117 |
+
channel-wise 2D convolution (hypercomplex convolu-
|
| 1118 |
+
tion which works along the channels) to a column con-
|
| 1119 |
+
volution and a row convolution. Extensive experiments
|
| 1120 |
+
show that this leads to systematic improvement with far
|
| 1121 |
+
fewer trainable parameters on image classification. This
|
| 1122 |
+
proposed method can save 33% and 50% of the trainable pa-
|
| 1123 |
+
rameters compared to original quaternion and vectormap
|
| 1124 |
+
networks and QPHM and VPHM networks, respectively.
|
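The parameter-saving claim above can be sanity-checked with a quick count. This is only a back-of-envelope sketch: the channel count (64) and kernel size (3) are illustrative choices, not the exact layer configuration used in the paper.

```python
# Weight count of a dense 2D convolution vs. its axial factorization
# (column convolution followed by row convolution). Bias terms are omitted.
def conv2d_params(c_in, c_out, k_h, k_w):
    return c_in * c_out * k_h * k_w

c_in, c_out, k = 64, 64, 3                     # illustrative sizes
full_2d = conv2d_params(c_in, c_out, k, k)     # one k x k convolution
axial = (conv2d_params(c_in, c_out, k, 1)      # k x 1 column convolution
         + conv2d_params(c_out, c_out, 1, k))  # 1 x k row convolution
savings = 1 - axial / full_2d                  # fraction of weights removed
```

For a 3×3 kernel the factorized pair keeps 2k/k² = 2/3 of the weights, a saving of one third, consistent with the 33% figure quoted above.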
| 1125 |
+
Although our proposed axial-hypercomplex design
|
| 1126 |
+
reduced parameter counts and FLOPS, it exhibited
|
| 1127 |
+
higher latency than real-valued and hypercomplex-
|
| 1128 |
+
valued convolutional networks.
|
| 1129 |
+
This is because the
|
| 1130 |
+
model performs convolution twice (height-axis and
|
| 1131 |
+
width-axis) and it takes transition time from 2D convo-
|
| 1132 |
+
lution to two consecutive 1D convolutions. As we re-
|
| 1133 |
+
placed spatial quaternion (four-dimensional hypercom-
|
| 1134 |
+
plex network) 2D convolution using two axial vec-
|
| 1135 |
+
tormap (three-dimensional hypercomplex network) 1D
|
| 1136 |
+
convolutions, the number of output channels is re-
|
| 1137 |
+
stricted to 120 or a multiple of 120, which is divisible by
|
| 1138 |
+
both three and four. Our investigation concludes that the per-
|
| 1139 |
+
formance comparison between the hypercomplex net-
|
| 1140 |
+
works and our proposed axial-hypercomplex networks
|
| 1141 |
+
shows that the axial-hypercomplex convolution provides
|
| 1142 |
+
better validation performance with fewer trainable pa-
|
| 1143 |
+
rameters and FLOPS for image classification tasks.
|
| 1144 |
+
Further work may be directed toward the architecture
|
| 1145 |
+
of the axial quaternion network and axial vectormap net-
|
| 1146 |
+
work. Moreover, other datasets will be tested to check
|
| 1147 |
+
whether these proposed architectures can perform in a
|
| 1148 |
+
similar manner or not.
|
| 1149 |
+
Finally, axial-quaternion and
|
| 1150 |
+
axial-vectormap convolutional methods will help re-
|
| 1151 |
+
move the constraint on the number of output channels,
|
| 1152 |
+
which must be divisible by four for axial-quaternion networks
|
| 1153 |
+
and by three for axial-vectormap networks.
|
| 1154 |
+
References
|
| 1155 |
+
[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Uni-
|
| 1156 |
+
tary evolution recurrent neural networks.
|
| 1157 |
+
In Interna-
|
| 1158 |
+
tional Conference on Machine Learning, pages 1120–
|
| 1159 |
+
1128. PMLR, 2016. 1
|
| 1160 |
+
[2] Pierre Buyssens, Abderrahim Elmoataz, and Olivier
|
| 1161 |
+
Lézoray. Multiscale convolutional neural networks for
|
| 1162 |
+
vision-based classification of cells. In Asian Conference
|
| 1163 |
+
on Computer Vision, pages 342–352. Springer, 2012. 1
|
| 1164 |
+
[3] Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalch-
|
| 1165 |
+
brenner, and Alex Graves. Associative long short-term
|
| 1166 |
+
memory. In International Conference on Machine Learn-
|
| 1167 |
+
ing, pages 1986–1994. PMLR, 2016. 1
|
| 1168 |
+
[4] Chase J Gaudet and Anthony S Maida.
|
| 1169 |
+
Deep quater-
|
| 1170 |
+
nion networks. In 2018 International Joint Conference
|
| 1171 |
+
on Neural Networks (IJCNN), pages 1–8. IEEE, 2018. 1,
|
| 1172 |
+
2, 4, 5, 6, 7
|
| 1173 |
+
[5] Chase J Gaudet and Anthony S Maida. Removing di-
|
| 1174 |
+
mensional restrictions on complex/hyper-complex neural
|
| 1175 |
+
networks. In 2021 IEEE International Conference on Im-
|
| 1176 |
+
age Processing (ICIP), pages 319–323. IEEE, 2021. 1,
|
| 1177 |
+
3, 4, 5, 6, 7
|
| 1178 |
+
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
|
| 1179 |
+
Sun. Deep residual learning for image recognition. In
|
| 1180 |
+
Proceedings of the IEEE conference on computer vision
|
| 1181 |
+
and pattern recognition, pages 770–778, 2016. 5, 6, 7
|
| 1182 |
+
[7] Akira Hirose and Shotaro Yoshida. Generalization char-
|
| 1183 |
+
acteristics of complex-valued feedforward neural net-
|
| 1184 |
+
works in relation to signal coherence. IEEE Transactions
|
| 1185 |
+
on Neural Networks and learning systems, 23(4):541–
|
| 1186 |
+
551, 2012. 1
|
| 1187 |
+
[8] Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and
|
| 1188 |
+
Tim Salimans. Axial attention in multidimensional trans-
|
| 1189 |
+
formers. arXiv preprint arXiv:1912.12180, 2019. 3
|
| 1190 |
+
[9] Shima Javanmardi,
|
| 1191 |
+
Seyed-Hassan Miraei Ashtiani,
|
| 1192 |
+
Fons J Verbeek, and Alex Martynenko. Computer-vision
|
| 1193 |
+
classification of corn seed varieties using deep convolu-
|
| 1194 |
+
tional neural network. Journal of Stored Products Re-
|
| 1195 |
+
search, 92:101800, 2021. 1
|
| 1196 |
+
[10] Alex Krizhevsky, Geoffrey Hinton, et al. Learning mul-
|
| 1197 |
+
tiple layers of features from tiny images. 2009. 6
|
| 1198 |
+
[11] Ya Le and Xuan S. Yang. Tiny imagenet visual recogni-
|
| 1199 |
+
tion challenge. 2015. 6, 7
|
| 1200 |
+
[12] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bis-
|
| 1201 |
+
sacco, Bo Wu, and Andrew Y Ng. Reading digits in nat-
|
| 1202 |
+
ural images with unsupervised feature learning. 2011. 6,
|
| 1203 |
+
7
|
| 1204 |
+
[13] Tohru Nitta.
|
| 1205 |
+
On the critical points of the complex-
|
| 1206 |
+
valued neural network. In Proceedings of the 9th Inter-
|
| 1207 |
+
national Conference on Neural Information Processing,
|
| 1208 |
+
2002. ICONIP’02., volume 3, pages 1099–1103. IEEE,
|
| 1209 |
+
2002. 1
|
| 1210 |
+
[14] Titouan Parcollet, Mohamed Morchid, and Georges
|
| 1211 |
+
Linarès. Quaternion convolutional neural networks for
|
| 1212 |
+
heterogeneous image processing. In ICASSP 2019-2019
|
| 1213 |
+
IEEE International Conference on Acoustics, Speech and
|
| 1214 |
+
Signal Processing (ICASSP), pages 8514–8518. IEEE,
|
| 1215 |
+
2019. 1, 2
|
| 1216 |
+
[15] Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid,
|
| 1217 |
+
Georges Linarès, Chiheb Trabelsi, Renato De Mori, and
|
| 1218 |
+
Yoshua Bengio. Quaternion recurrent neural networks.
|
| 1219 |
+
arXiv preprint arXiv:1806.04418, 2018. 1, 2
|
| 1220 |
+
[16] Nazmul Shahadat and Anthony Maida. Enhancing resnet
|
| 1221 |
+
image classification performance by using parameterized
|
| 1222 |
+
hypercomplex multiplication, Nov 2021. 1, 2, 3, 4, 5, 6,
|
| 1223 |
+
7
|
| 1224 |
+
[17] Nazmul Shahadat and Anthony S Maida. Adding quater-
|
| 1225 |
+
nion representations to attention networks for classifica-
|
| 1226 |
+
tion. arXiv preprint arXiv:2110.01185, 2021. 1, 4
|
| 1227 |
+
[18] Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy
|
| 1228 |
+
Serdyuk, Sandeep Subramanian, Joao Felipe Santos,
|
| 1229 |
+
Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio,
|
| 1230 |
+
and Christopher J Pal. Deep complex networks. arXiv
|
| 1231 |
+
preprint arXiv:1705.09792, 2017. 2, 7
|
| 1233 |
+
[19] Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig
|
| 1234 |
+
Adam, Alan Yuille, and Liang-Chieh Chen.
|
| 1235 |
+
Axial-
|
| 1236 |
+
deeplab: Stand-alone axial-attention for panoptic seg-
|
| 1237 |
+
mentation. In European Conference on Computer Vision,
|
| 1238 |
+
pages 108–126. Springer, 2020. 3
|
| 1239 |
+
[20] Jiasong Wu, Ling Xu, Fuzhi Wu, Youyong Kong, Lotfi
|
| 1240 |
+
Senhadji, and Huazhong Shu. Deep octonion networks.
|
| 1241 |
+
Neurocomputing, 397:179–191, 2020. 1
|
| 1242 |
+
[21] Ruyue Xin, Jiang Zhang, and Yitong Shao. Complex net-
|
| 1243 |
+
work classification with convolutional neural network.
|
| 1244 |
+
Tsinghua Science and technology, 25(4):447–457, 2020.
|
| 1245 |
+
1
|
| 1246 |
+
[22] Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan,
|
| 1247 |
+
Anh Tuan Luu, Siu Cheung Hui, and Jie Fu. Beyond
|
| 1248 |
+
fully-connected layers with quaternions: Parameteriza-
|
| 1249 |
+
tion of hypercomplex multiplications with 1/n parame-
|
| 1250 |
+
ters. arXiv preprint arXiv:2102.08597, 2021. 3
|
| 1251 |
+
|
19E3T4oBgHgl3EQfngq-/content/tmp_files/load_file.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
1dAzT4oBgHgl3EQfDfo-/vector_store/index.faiss
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1ee1a55b107ef0baa2c6f3bfaf9d3d649e3fe3d6f54e2706f76038ca1ae167fb
|
| 3 |
+
size 3932205
|
29FST4oBgHgl3EQfYTgs/vector_store/index.faiss
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f77ecc606b722b64c5cee732e6833a61274b9dca86e8bcdfce00e5fbded4f9ef
|
| 3 |
+
size 3997741
|
2NE4T4oBgHgl3EQfagwY/content/2301.05064v1.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4ad46b3db0d08b09bb4a44bb3bdc530a4eb85bf95a206b63fc4510061ef73ca6
|
| 3 |
+
size 1473776
|
2NE4T4oBgHgl3EQfagwY/vector_store/index.faiss
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:45b58acfcda79e70a75d2a4ac9f16136845f1a34777af98eb208d748034d8711
|
| 3 |
+
size 2031661
|
2NE4T4oBgHgl3EQfagwY/vector_store/index.pkl
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ea54ad38f832cd24183496cc976c7ad0037911202cbd9a66c6bd6fd0a7c12da5
|
| 3 |
+
size 70924
|
4tE4T4oBgHgl3EQf1A0X/vector_store/index.faiss
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d4ba40e328a12de8e3a5d290ea3b3a3665b1c9a0122c64fdfc198494bc2e334e
|
| 3 |
+
size 2752557
|
5NAyT4oBgHgl3EQfpPjF/content/tmp_files/2301.00523v1.pdf.txt
ADDED
|
@@ -0,0 +1,1189 @@
|
| 1 |
+
Bayesian Generalized Kernel Inference for Exploration
|
| 2 |
+
of Autonomous Robots
|
| 3 |
+
Yang Xu, Student Member, IEEE, Ronghao Zheng†, Member, IEEE, Senlin Zhang, Member, IEEE,
|
| 4 |
+
and Meiqin Liu, Senior Member, IEEE
|
| 5 |
+
Abstract— This paper concerns realizing highly efficient
|
| 6 |
+
information-theoretic robot exploration with desired perfor-
|
| 7 |
+
mance in complex scenes. We build a continuous lightweight
|
| 8 |
+
inference model to predict the mutual information (MI) and
|
| 9 |
+
the associated prediction confidence of the robot’s candidate
|
| 10 |
+
actions which have not been evaluated explicitly. This allows
|
| 11 |
+
the decision-making stage in robot exploration to run with
|
| 12 |
+
an approximately logarithmic complexity, which will also benefit
|
| 13 |
+
online exploration in large, unstructured, and cluttered places
|
| 14 |
+
that need more spatial samples to assess and decide. We also
|
| 15 |
+
develop an objective function to balance the local optimal action
|
| 16 |
+
with the highest MI value and the global choice with high prediction
|
| 17 |
+
variance. Extensive numerical and dataset simulations show
|
| 18 |
+
the desired efficiency of our proposed method without losing
|
| 19 |
+
exploration performance in different environments. We also
|
| 20 |
+
provide our open-source implementation codes released on
|
| 21 |
+
GitHub for the robot community.
|
| 22 |
+
I. INTRODUCTION
|
| 23 |
+
Robot exploration has gained prevalence recently in a pri-
|
| 24 |
+
ori unknown environments such as subterranean, marine,
|
| 25 |
+
and planetary tasks [1]–[3]. Among the literature, state-
|
| 26 |
+
of-the-art exploration methods prefer to use information-
|
| 27 |
+
theoretic metrics in each iteration, such as Shannon mutual
|
| 28 |
+
information (MI) [4] and its derivatives [5]–[8], to evaluate
|
| 29 |
+
the information gain brought by candidate control actions
|
| 30 |
+
accurately, and then choose and execute the most informative
|
| 31 |
+
action; thus the exploration problem becomes a sequential
|
| 32 |
+
optimal decision-making one naturally. A typical exploration
|
| 33 |
+
example is in Fig. 1.
|
| 34 |
+
Intuitively, the way to tackle this problem is to use a
|
| 35 |
+
greedy strategy and add more candidate actions, including
|
| 36 |
+
sampled nodes [9], [10], available viewpoints [11], [12],
|
| 37 |
+
or special motion primitives [13], [14], in the discrete ac-
|
| 38 |
+
tion space. However, the exploration performance of greedy
|
| 39 |
+
selection is closely related to the discrete sampling reso-
|
| 40 |
+
lution/method of action space over the map grid, i.e., a
|
| 41 |
+
coarse resolution may lead to sub-optimal actions/paths,
|
| 42 |
+
and a fine one may generate more samples and be more
|
| 43 |
+
likely to choose the optimal action, but the computational
|
| 44 |
+
cost of the information gain evaluation of all candidate
|
| 45 |
+
actions will become expensive in this case since the forward
|
| 46 |
+
1Yang Xu, Ronghao Zheng and Senlin Zhang are with the College
|
| 47 |
+
of Electrical Engineering, Zhejiang University, Hangzhou 310027, China.
|
| 48 |
+
{xuyang94,rzheng,slzhang}@zju.edu.cn
|
| 49 |
+
2Meiqin Liu is with the Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China.
|
| 66 |
+
liumeiqin@zju.edu.cn
|
| 67 |
+
3All authors are also with the State Key Laboratory of Industrial Control
|
| 68 |
+
Technology, Zhejiang University, Hangzhou 310027, China.
|
| 69 |
+
†Corresponding author
|
| 90 |
+
Fig. 1.
|
| 91 |
+
MI-based active robot exploration in an unknown unstructured
|
| 92 |
+
environment. (a) Informative trajectory and the resulting occupancy map,
|
| 93 |
+
(b) Resulting MI surface. Note that the coincident yellow squares mean the
|
| 94 |
+
start and end points. A minimum information threshold is set to select more
|
| 95 |
+
informative exploration actions, e.g. the middle left and top right areas are
|
| 96 |
+
less informative than the threshold and thus unexplored. Note that the scale
|
| 97 |
+
of MI is in [0,1] bit in this paper.
|
| 98 |
+
simulation in the evaluation requires extensive raycasting and
|
| 99 |
+
MI calculation. Notably, these consequences will be more
|
| 100 |
+
distinct in 3D environments because the increased dimension
|
| 101 |
+
needs much more samples.
|
| 102 |
+
In this paper, we aim to realize a more efficient and
|
| 103 |
+
accurate approach to find the most informative action without
|
| 104 |
+
evaluating all candidate actions exhaustively and expensively
|
| 105 |
+
in robot exploration. Specifically, our main contributions are
|
| 106 |
+
three-fold:
|
| 107 |
+
1) We propose a Bayesian kernel spatial MI inference
|
| 108 |
+
method to construct a continuous surrogate evaluation model
|
| 109 |
+
between robot actions and MI values using only partial ex-
|
| 110 |
+
plicitly evaluated samples, which can perform highly efficient
|
| 111 |
+
MI prediction of control actions in logarithm time;
|
| 112 |
+
2) We develop a reward function comprising the predicted
|
| 113 |
+
MI values and uncertainties to find the best action for realiz-
|
| 114 |
+
ing the trade-off between exploration and exploitation, which
|
| 115 |
+
has been validated in numerical and dataset simulations;
|
| 116 |
+
3) Meanwhile, we release an open-source implementation
|
| 117 |
+
of our proposed method here1 for the robotics community.
|
| 118 |
+
The paper organization is as follows. Related works about
|
| 119 |
+
the recent learning-based robot exploration methods are pre-
|
| 120 |
+
sented in Section II. We formulate the problem in Section III
|
| 121 |
+
and present our Bayesian kernel-based MI inference method
|
| 122 |
+
in Section IV. Simulation results using synthetic data and
|
| 123 |
+
the real world dataset and discussions are given in Section
|
| 124 |
+
V, followed by conclusions in Section VI.
|
| 125 |
+
II. RELATED WORK
|
| 126 |
+
In the context of robot exploration, supervised learning
|
| 127 |
+
techniques provide a powerful tool to find the global op-
|
| 128 |
+
1https://github.com/Shepherd-Gregory/BKI-exploration
|
| 129 |
+
arXiv:2301.00523v1 [cs.RO] 2 Jan 2023
|
| 138 |
+
timum approximately by training predictive models using
|
| 139 |
+
minor parts of actions in continuous action spaces, without
|
| 140 |
+
evaluating the objective function expensively, which also has
|
| 141 |
+
better interpretability in black-box inference [15]–[17].
|
| 142 |
+
In [18], Bai et al. used the Gaussian process (GP) to model
|
| 143 |
+
the relationship between control actions and the explicitly
|
| 144 |
+
evaluated MI for the robot exploring priori unknown areas.
|
| 145 |
+
In [19], they further introduced Bayesian optimization (BO)
|
| 146 |
+
into the information-theoretic robot exploration to optimize
|
| 147 |
+
the GP prediction in multiple iterations, which provides rapid
|
| 148 |
+
map entropy reduction and ensures computational efficiency.
|
| 149 |
+
Generally, BO assumes a prior distribution on the objective
|
| 150 |
+
function and constructs predictive models to describe the
|
| 151 |
+
underlying relationship between robot actions and their MI.
|
| 152 |
+
It also assesses the acquisition function derived from the
|
| 153 |
+
GP prior and samples, then chooses the next query point
|
| 154 |
+
maximizing the acquisition function and balancing the trade-
|
| 155 |
+
off between exploration (global) and exploitation (local).
|
| 156 |
+
Iteratively, BO presents more precise results on the posterior
|
| 157 |
+
distribution as the observations (training samples) increase.
|
| 158 |
+
Rather than evaluating discrete viewpoints, Francis et al. [17]
|
| 159 |
+
modeled the autonomous exploration and mapping task as a
|
| 160 |
+
constrained BO aiming to find optimal continuous paths.
|
| 161 |
+
However, the main bottleneck of the above BO-based robot
|
| 162 |
+
exploration methods is that the number of the training actions
|
| 163 |
+
N will affect the resulting prediction accuracy directly, as
|
| 164 |
+
well as the computational cost. That implies one needs to
|
| 165 |
+
pay expensive computations to achieve higher exploration
|
| 166 |
+
performance. Typically, updating and querying the GP mod-
|
| 167 |
+
els (the engine behind BO) have an overall O(N^3) time
|
| 168 |
+
complexity. This compromises the inference efficiency and
|
| 169 |
+
real-time performance of robot exploration tasks inevitably,
|
| 170 |
+
especially in large-scale and 3D scenes.
|
| 171 |
+
More recently, deep neural networks (DNNs) have been
|
| 172 |
+
introduced to realize predicting optimal sensing actions more
|
| 173 |
+
efficiently. Bai et al. [20] trained the DNN with plenty of
|
| 174 |
+
randomly generated 2D maps to generate suggested action
|
| 175 |
+
and ensure inferring in constant time. Graph neural networks
|
| 176 |
+
(GNNs) have also been combined with reinforcement learn-
|
| 177 |
+
ing methods to learn the best action from an exploration
|
| 178 |
+
graph, rather than metric maps or visual images [21], [22].
|
| 179 |
+
Nevertheless, the neural network-based robot exploration
|
| 180 |
+
methods require numerous training samples beforehand and
|
| 181 |
+
are also limited to the adaptability and generalization ca-
|
| 182 |
+
pability in different environments, which may need further
|
| 183 |
+
studies in the future.
|
| 184 |
+
Encouragingly, the Bayesian kernel inference (BKI) tech-
|
| 185 |
+
nique proposed in [23] gives us a chance to perform ef-
|
| 186 |
+
ficient exact inference on a simplified model, rather than
|
| 187 |
+
approximating inference on an exact generative model (e.g.
|
| 188 |
+
GP) expensively. BKI extends local kernel estimation to
|
| 189 |
+
Bayesian inference for exponential family likelihood func-
|
| 190 |
+
tions, enabling only O(log Nq) (Nq: the number of querying
|
| 191 |
+
samples) run time for inference. These significant merits
|
| 192 |
+
enhance BKI’s application in robotics, including sensor un-
|
| 193 |
+
certainty estimation [24], high-speed navigation [25], as well
|
| 194 |
+
as environment mapping using sparse sensor measurements
|
| 195 |
+
such as terrain traversability mapping [26], 3D occupancy
|
| 196 |
+
mapping [27], semantic mapping [28].
|
| 197 |
+
Motivated by [19] and [23], we use BKI to infer the spatial MI
|
| 198 |
+
in an efficient and closed-form way for the control actions
|
| 199 |
+
whose MI values have not been explicitly evaluated via
|
| 200 |
+
expensive computation (e.g. [4]). Our method keeps similar
|
| 201 |
+
accuracy compared with existing
|
| 202 |
+
works such as [18] and [19], but shows more efficient and
|
| 203 |
+
suitable performance for complex scenes requiring numerous
|
| 204 |
+
explicitly evaluated samples.
|
| 205 |
+
III. PRELIMINARIES AND NOTIONS
|
| 206 |
+
In this paper, for simplicity of discussion, we mainly
|
| 207 |
+
consider the information-theoretic exploration using a mobile
|
| 208 |
+
robot equipped with a beam-based range sensor of limited
|
| 209 |
+
field of view (FOV) in a 2D environment. The results here
|
| 210 |
+
can also be extended to 3D cases expediently.
|
| 211 |
+
A. Information-Theoretic Exploration
|
| 212 |
+
Generally, the robot generates a set of candidate actions
|
| 213 |
+
Xaction in the robot’s feasible configuration space X ⊆
|
| 214 |
+
SE(2). We also assume this configuration space has been
|
| 215 |
+
discretized by a fixed resolution over the 2D static grid map.
|
| 216 |
+
The set of values m ∈ [0, 1] is the occupancy level over the
|
| 217 |
+
independent grid cells and can be updated and queried by
|
| 218 |
+
the classic log-odds method [29]. The occupancy value of
|
| 219 |
+
an unobserved map cell ξ is assumed to be uniform, i.e.,
|
| 220 |
+
p(mξ) = 0.5.
|
| 221 |
+
Here we use the classic Shannon MI [4] as the information
|
| 222 |
+
measure of candidate configuration xi = [pxi, pyi, ψi] ∈ Xaction, where pxi and pyi denote the robot’s position on the
|
| 228 |
+
map, and ψi denotes the heading angle of the robot. From
|
| 229 |
+
the view of information theory, the expected information
|
| 230 |
+
gain of xi can be evaluated by the current map entropy and
|
| 231 |
+
conditional entropy given a new measurement at xi:
|
| 232 |
+
I(m; xi) = H(m) − H(m|xi).    (1)
|
| 234 |
+
The aim of information-theoretic robot exploration is to
|
| 235 |
+
select the best action xbest maximizing the expected MI:
|
| 236 |
+
xbest = argmax_{xi ∈ Xaction} I(m; xi).    (2)
|
| 240 |
+
Notably, the MI of each configuration can be decomposed
|
| 241 |
+
over independent beams and then to cells via raycasting, then
|
| 242 |
+
accumulated over cells as an approximation, which has a
|
| 243 |
+
squared time complexity in map resolution λm at worst [7].
|
| 244 |
+
This also brings more evaluation costs for robot exploration.
|
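The selection rule of Eqs. (1)-(2) can be sketched in a few lines. Everything below is illustrative: the candidate actions and their expected posterior occupancy values stand in for what forward-simulating the sensor model via raycasting would actually produce.

```python
import math

def bernoulli_entropy(p):
    # Shannon entropy (in bits) of one occupancy cell
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_mi(prior, posterior):
    # Eq. (1): I(m; x) = H(m) - H(m|x), summed over independent cells
    return sum(bernoulli_entropy(a) - bernoulli_entropy(b)
               for a, b in zip(prior, posterior))

prior = [0.5] * 100                    # unobserved cells: p(m) = 0.5
candidates = {                         # hypothetical expected posteriors
    "x1": [0.45] * 100,                # weak observation of every cell
    "x2": [0.1] * 50 + [0.5] * 50,     # strong observation of half the map
}
gains = {x: expected_mi(prior, post) for x, post in candidates.items()}
x_best = max(gains, key=gains.get)     # Eq. (2): argmax over the action set
```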
| 245 |
+
B. Bayesian Generalized Kernel Inference
|
| 246 |
+
Consider a supervised learning-based inference problem
|
| 247 |
+
on predictive stochastic models p(y|x) given a sequence of
|
| 248 |
+
N observations D = {(xi, yi)}_{i=1}^{N} with x = {xi} and y = {yi}, where x
|
| 250 |
+
and y represent the set of evaluated configurations and the
|
| 251 |
+
resulting MI values I(m; x), respectively. The main objective
|
| 252 |
+
is to infer the posterior distribution p(y∗|x∗, D) for the target
|
| 253 |
+
inputs x∗ to be evaluated. This problem can be solved by
|
| 254 |
+
associating latent parameters θ = {θi}_{i=1}^{N} ∈ Θ with input x
|
| 256 |
+
in the latent space Θ, where the likelihood p(y|θ) is known.
|
| 257 |
+
|
| 258 |
+
Thus the inference on y∗ can be formulated as an inference
|
| 259 |
+
on target parameters θ∗ related to x∗:
|
| 260 |
+
p(y∗|x∗, D) = ∫_Θ p(y∗|θ∗) p(θ∗|x∗, D) dθ∗,    (3)
|
| 265 |
+
where the posterior distribution of the latent variables
|
| 266 |
+
can be characterized using Bayes’ rule: p(θ∗|x∗, D) ∝ ∫_Θ ∏_{i=1}^{N} p(yi|θi) p(θ1:N, θ∗|x1:N, x∗) dθ1:N.
|
| 271 |
+
By strongly assuming latent parameters θ1:N are con-
|
| 272 |
+
ditionally independent given the target parameters θ∗:
|
| 273 |
+
p(θ1:N, θ∗|x1:N, x∗) = ∏_{i=1}^{N} p(θi|θ∗, xi, x∗) p(θ∗|x∗), one
|
| 275 |
+
can marginalize the latent variables θ1:N and then obtain
|
| 276 |
+
p(θ∗|x∗, D) ∝ ∏_{i=1}^{N} p(yi|θ∗, xi, x∗) p(θ∗|x∗).
|
| 278 |
+
BKI further defines a distribution with a special smoothness constraint and a bounded Kullback-Leibler divergence (KLD) DKL(g||f) between the extended likelihood p(yi|θ∗, xi, x∗), represented by g, and the likelihood p(yi|θi), represented by f. That is, the maximum entropy distribution g satisfying DKL(g||f) ≤ ρ(x∗, x) has the form g(y) ∝ f(y)^k(x∗,x), where ρ(·, ·) : X × X → R+ is a smoothness bound and k(·, ·) : X × X → [0, 1] is a kernel function uniquely determined by ρ. Substituting into Eq. (3), we get:
p(θ∗|x∗, D) ∝ ∏_{i=1}^N p(yi|θ∗)^k(x∗,xi) p(θ∗|x∗).   (4)
Thus the posterior distribution can be inferred exactly by using a likelihood from the exponential family and assuming the corresponding conjugate prior.
IV. BAYESIAN KERNEL INFERENCE FOR ROBOT EXPLORATION
To efficiently evaluate the exact MI of unknown robot configurations sampled in the spatial action space, we solve this problem with a Bayesian kernel inference approach.
A. Bayesian Kernel Spatial MI Inference
As mentioned in Section III.B, we assume the underlying likelihood model between the MI values y and the latent parameters θ follows a Gaussian distribution with unknown mean µ ∈ R^N and fixed, known covariance Σ:

p(y|µ) = N(µ, Σ),  Σ = diag(σ²) ∈ R^{N×N},   (5)
thus its conjugate prior can also be described by a Gaussian distribution using the hyperparameter ζ and the target sample input x∗:

p(µ|x∗) = N( µ0(x∗), (1/ζ(x∗)) Σ(x∗) ),   (6)
where µ0 and ζ are the initial belief of the mean and the uncertainty of that belief, respectively: ζ = 0 means no confidence, and ζ → ∞ indicates full prior knowledge. Here we assume ζ is a small positive constant, since we do not have much prior information about the belief when exploring unknown areas.
Therefore, we can substitute Eq. (6) and Eq. (5) into Eq. (4) given the observations D:

p(µ∗|x∗, D) ∝ ∏_{i=1}^N exp( −(1/2) k(x∗, xi)(yi − µ∗)²/σ² ) · exp( −(1/2) ζ(µ∗ − µ0)²/σ² ),   (7)
and the posterior mean and covariance of the MI can be derived as follows:

Ī(x∗) = E[y∗|x∗, D] = E[µ∗|x∗, D] = (ȳ + ζµ0)/(ζ + k̄) ≃ ȳ/k̄,
σI(x∗) = V[µ∗|x∗, D] = Σ/(ζ + k̄) ≃ Σ/k̄,   (8)

where ȳ and k̄ can be computed by kernel functions:

k̄ = Σ_{i=1}^N k(x∗, xi),  ȳ = Σ_{i=1}^N k(x∗, xi) yi.   (9)
Given a set of observations D evaluated explicitly as input, we can then easily compute the MI and the corresponding confidence for the test spatial configurations x∗ using Eq. (8) and Eq. (9).
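As a minimal sketch (in Python, with illustrative names; `kernel` is any kernel callable, such as the Matérn kernel selected below), the closed-form prediction of Eq. (8) from the kernel sums of Eq. (9) might look like:

```python
import numpy as np

def bki_predict(x_train, y_train, x_query, kernel, zeta=0.001, sigma2=0.01, mu0=0.0):
    """Predict the MI mean and its variance at query configurations via Eqs. (8)-(9).

    x_train: (N, d) explicitly evaluated configurations; y_train: (N,) MI values;
    x_query: (M, d) query configurations; kernel(xq, X) -> (N,) weights k(x*, x_i).
    """
    mi = np.empty(len(x_query))
    var = np.empty(len(x_query))
    for j, xq in enumerate(x_query):
        w = kernel(xq, x_train)                        # k(x*, x_i) for all i
        k_bar = w.sum()                                # Eq. (9): k-bar
        y_bar = w @ y_train                            # Eq. (9): y-bar
        mi[j] = (y_bar + zeta * mu0) / (zeta + k_bar)  # Eq. (8): posterior mean
        var[j] = sigma2 / (zeta + k_bar)               # Eq. (8): posterior variance
    return mi, var
```

Note that no matrix inversion appears anywhere: each query costs only the N kernel evaluations, which is what enables the logarithmic-time inference discussed later.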
B. Kernel Selection
The kernel function of the BKI method directly affects the computational efficiency and accuracy, so selecting an appropriate kernel is quite significant. In [26]–[28], the chosen sparse kernels remove training points far away from the queried points, which allows efficient and exact evaluation (e.g., occupancy, traversability, semantic class) over the observations in logarithmic run time using k-d trees. Unlike mapping tasks, where sufficient training data are obtained from onboard sensors, robot exploration generates and evaluates relatively few candidate configurations in a limited space at each time instant, so there is no need to reject the rare training samples in robot exploration tasks.
Among the exponential kernel functions, we prefer the Matérn kernel for its capability of handling sudden transitions of terrain [30], [31], since the potential obstacles and unknown structures in application scenes that have never been seen before will vary the MI values greatly. The typical Matérn kernel function is as follows:

k(x∗, x) = (2^{1−ν}/Γ(ν)) (√(2ν) r/ℓ)^ν Kν(√(2ν) r/ℓ),  r = ||x∗ − x||,   (10)

where the positive parameters ν and ℓ are the smoothness constant and the characteristic length scale, respectively, and Γ(·) and Kν are the gamma function and the modified Bessel function, respectively. In practice, we choose a Matérn 3/2 kernel (ν = 3/2) of the form k(x∗, x) = (1 + √3 r/ℓ) exp(−√3 r/ℓ).
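For illustration, a small Python sketch of the Matérn 3/2 form above (the function name and the vectorization over training points are ours):

```python
import numpy as np

def matern32(x_query, x_train, ell=1.0):
    """Matérn 3/2 kernel of Eq. (10) with nu = 3/2:
    k(x*, x) = (1 + sqrt(3) r / ell) * exp(-sqrt(3) r / ell), r = ||x* - x||,
    evaluated against each training configuration in x_train."""
    r = np.linalg.norm(np.atleast_2d(x_train) - np.asarray(x_query), axis=1)
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)
```

The kernel equals 1 at r = 0 and decays monotonically with distance, so nearby evaluated actions dominate the sums in Eq. (9).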
C. BKI-based Robot Exploration
In robot exploration, we expect the robot to move toward places with high predicted MI values so as to maximize the information gain locally, but this greedy "exploration" may lead to undesired paths or, even worse, to getting stuck in cluttered areas. Instead, unexplored places with high predicted uncertainty are also worth exploring, since they may guide a globally optimal path for the robot in a prior unknown area; this is characterized as "exploitation". Therefore, we integrate the prediction confidence of the MI values with the predicted MI to realize a trade-off between exploration and exploitation; the suggested action maximizing the information objective function based on Eq. (2) and Eq. (8) is then:
xs = argmax_{xi ∈ Xaction} [ αI(m; xi) + (1 − α)σI(xi) ],   (11)

where α ∈ [0, 1] is the trade-off factor.
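The selection rule of Eq. (11) can be sketched in a few lines of Python (names are illustrative; `mi_pred` and `sigma_pred` would come from Eq. (8)):

```python
import numpy as np

def suggest_action(actions, mi_pred, sigma_pred, alpha=0.5):
    """Return the action maximizing alpha * I + (1 - alpha) * sigma_I, per Eq. (11)."""
    obj = alpha * np.asarray(mi_pred) + (1.0 - alpha) * np.asarray(sigma_pred)
    best = int(np.argmax(obj))
    return actions[best], float(obj[best])
```

With alpha = 1 the rule is purely greedy on the predicted MI; with alpha = 0 it chases only the prediction uncertainty.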
The autonomous exploration framework based on our BKI MI inference method is given in Algorithm 1, where Algorithm 2 is the BKI optimization module.
Algorithm 1 BKI Exploration( )
Require: Occupancy map at the kth time step mk, previous robot poses xhist = x0:k−1 and current pose xk, the number of explicitly evaluated samples N, information threshold Ith, the number of querying samples Nq, while-loop count limit Nloop
1: iter = 0
2: while xhist ≠ ∅ AND iter < Nloop do
3:   iter = iter + 1
4:   // Sample N training actions
5:   x ← Sampling(xk, mk, N);
6:   // Evaluate these actions explicitly using Eq. (1)
7:   for each xi ∈ x do
8:     mvirtual ← Raycasting(xi, mk);
9:     Ii ← ComputeMI(mvirtual);
10:    y ← y ∪ Ii;
11:  end for
12:  x∗ ← Sampling(xk, mk, Nq);
13:  // Find the suggested action using Algorithm 2
14:  {xbest, Ibest} ← BKIOptimization({x, y}, x∗);
15:  if max(Ibest) > Ith then
16:    xk+1 ← xbest(MaxInfoIndex);
17:    xhist ← xhist ∪ xk+1;
18:  else
19:    xk+1 ← xk−1; // Back to the previous action
20:    Remove xk−1 from xhist;
21:  end if
22:  // Execute the action and update the map
23:  Plocal ← Astar(xk, xk+1); // Plan the local path by A*
24:  mk+1 ← OccupancyGridMapping(Plocal);
25: end while
Proposition 1: The time complexity of our proposed method at each while-loop step in Algorithm 1 is

O(N Nz Nc²) [explicit MI evaluation] + O(Nepoch N log Nq) [BKI MI inference],   (12)

where Nepoch is the number of training epochs, and Nz and Nc are the number of beams per sensor scan and the number of cells that a beam intersects with the grid map at worst, respectively.
Algorithm 2 BKI Optimization( )
Require: Training set D = {(xi, yi)}_{i=1}^N, current action set to be evaluated x∗, training epochs Nepoch, factor α
1: xbest ← {}, Ibest ← {};
2: for each epoch do
3:   // Compute the kernel function using Eq. (10)
4:   k ← KernelFunction(x∗, x);
5:   // Compute the MI and uncertainty using Eq. (8)
6:   k̄ ← Σk, ȳ ← k · y;
7:   I∗ ← ȳ/k̄, σI∗ ← Σ/k̄;
8:   ObjFunc ← αI∗ + (1 − α)σI∗;
9:   xs ← argmax(ObjFunc);
10:  if xs ∈ x then
11:    xbest ← xbest ∪ xs, Ibest ← Ibest ∪ ys;
12:  else
13:    // Evaluate the MI explicitly using Eq. (1)
14:    Is ← CalculateMI(xs);
15:    // Add into D
16:    xbest ← xbest ∪ xs, x ← x ∪ xs;
17:    Ibest ← Ibest ∪ Is, y ← y ∪ Is;
18:  end if
19: end for
20: return xbest, Ibest
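One epoch of Algorithm 2 can be sketched in Python as follows (a non-authoritative sketch with ζ ≈ 0, as in the approximations of Eq. (8); `kernel` and `evaluate_mi` stand in for the paper's KernelFunction and the explicit Eq. (1) evaluation, and for simplicity the suggested action is always evaluated explicitly, i.e., only the else branch of lines 12–17 is shown):

```python
import numpy as np

def bki_optimization_epoch(x_train, y_train, x_query, kernel, evaluate_mi,
                           alpha=0.5, sigma2=0.01):
    """One epoch of the BKI optimization loop (Algorithm 2, sketched)."""
    K = np.array([kernel(xq, x_train) for xq in x_query])   # line 4, shape (M, N)
    k_bar = np.maximum(K.sum(axis=1), 1e-12)                # Eq. (9), guarded
    y_bar = K @ y_train
    mi = y_bar / k_bar                                      # Eq. (8), lines 6-7
    sigma = sigma2 / k_bar
    obj = alpha * mi + (1.0 - alpha) * sigma                # Eq. (11), line 8
    s = int(np.argmax(obj))                                 # line 9
    xs = x_query[s]
    ys = evaluate_mi(xs)                 # line 14: explicit MI evaluation
    x_train = np.vstack([x_train, xs])   # lines 16-17: add (xs, ys) into D
    y_train = np.append(y_train, ys)
    return xs, ys, x_train, y_train
```

Each epoch only grows the training set by the single suggested action, which is what keeps the per-epoch inference cost at the kernel evaluations rather than a cubic GP solve.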
Significantly, the GP-based robot exploration in [18] and the BO-based method in [19] have the same explicit MI evaluation time cost as ours, but these two methods incur computational complexities of O(N³ + N²Nq) and O(Nepoch(N³ + N²Nq)), respectively, to perform the expensive GP inference for the MI. This comparative theoretical result indicates that our BKI-based exploration method outperforms the GP-based methods in time efficiency, especially in large-scale and cluttered places that require more samples N and Nq to be evaluated rapidly.
V. RESULTS AND DISCUSSIONS
In this section, we run numerical simulations and dataset experiments on a desktop PC with a 3.6 GHz Intel i3-9100F CPU and 32 GB RAM to verify the effectiveness of the proposed BKI-based robot exploration method. The information threshold is Ith = 0.05 bit and the trade-off factor is α = 0.5. We adopt a Matérn kernel for the GP, with kernel parameters ℓ = 1 and ν = 3/2 for all simulations. We also choose ζ = 0.001 and σ = 0.01 for the BKI method. The robot poses are assumed to be known, and the robot's candidate actions are sampled uniformly in the FOV of the range sensor. We conduct 20 Monte Carlo trials for all maps.
We use greedy-based optimization (named "NBO" in the simulations), batch GP with only 1 optimization epoch ("batch GP") [18], and GP-based BO with multiple epochs ("GP-BO") [19] to compare with our two methods, one named "batch BKI" with only 1 optimization epoch and the other named "BKI-BO" with multiple epochs. Meanwhile, to validate the time efficiencies, we apply two cases of N = 30 and N = 60 samples for each method, where GP-BO 30
[Figure 2 here: (a) informative trajectory in the occupancy map; (b) MI surface.]
Fig. 2. An example of BKI-based robot exploration in an unknown structured environment. Yellow square: start point; yellow star: end point; red line: robot direction at each action.
and BKI-BO 30 use Nepoch = 15 iterations, while GP-BO 60 and BKI-BO 60 use 30 epochs in the BKI optimization. We also set Nq = 8N in all simulations.
A. Synthetic Environments Results
To simulate indoor and field scenes, we generate two 24 m × 14 m synthetic maps: one structured maze map surrounded by several walls (shown in Fig. 2, Nloop = 50), and one unstructured map consisting of circles and ellipses (shown in Fig. 1, Nloop = 150). The map resolutions are both 0.2 m. The simulated range sensor has a FOV of ±1.5 rad with a resolution of 0.05 rad and a maximum sensing range of 6 m. The robot starts at [1.2 m, 1.2 m] with a 0 rad heading and tries to explore the prior unknown map. The representative resulting paths maximizing the information objective function are shown in Fig. 1 and Fig. 2.
The quantitative results for the structured and unstructured maps are shown in Fig. 3 and Fig. 4, respectively. To compare the exploration performance of the different methods intuitively, we present the evolution of the map entropy and coverage rate of each method in the figures, where the solid and dashed lines depict the means over the Monte Carlo trials for each method, and the shaded regions represent the standard deviations.
Fig. 3 shows that the BKI and GP methods have performance similar to the NBO methods, since this structured scene is relatively small and simple, especially in the beginning stage where there is only one corridor to move forward. In contrast, Fig. 4 indicates that the NBO methods spend more time (about 50∼70 steps) to converge and end the exploration, while the BKI and GP methods complete the exploration with entropy reduction and coverage rates comparable to the NBOs.
Moreover, as in Fig. 5, we use the explicitly evaluated MI as the ground truth and compute the MI prediction errors of the BKI-BO and GP-BO methods with small training sample sets in a randomly selected step, which implies the BKI-based approach can match the GP-based one in MI inference accuracy when facing challenging cases.
In short, these results validate that our BKI methods are competitive with the GP-based exploration methods in typical structured and unstructured scenes.
B. Dataset Results
To test our method in a more complex environment, we choose the Seattle map [32] containing narrow long corridors
[Figure 3 here: (a), (b) map entropy and (c), (d) coverage versus exploration steps for NBO, GP-BO/batch GP, and BKI-BO/batch BKI with 30 and 60 samples.]
Fig. 3. Map entropy and coverage results of the synthetic structured map.
and cluttered rooms, as in Fig. 6. The map size is 24 m × 14 m with a resolution of 0.2 m. We use a simulated laser scanner emitting 20 beams uniformly within a FOV of ±π/3 rad at a maximum range of 4 m. The robot starts at [13, 57] m with a −π/2 initial heading angle. Nloop is set to 100.
Fig. 7 presents the comparative curves of map entropy and coverage rates. Fig. 7(a) shows that the BKI-BO methods reduce the map entropy more rapidly after the exploration starts and arrive at relatively lower levels than the other methods; among them, BKI-BO 60 performs the best. In this typical cluttered map, the GP-BO methods perform slightly worse than our BKI-BO methods but almost catch up with ours, and both are much better than the NBO methods. The curves in Fig. 7(b) imply that batch GP and batch BKI have similar performance. We can also gain an insight from Fig. 7(c) and (d): the coverage curves of the BKI-BO methods converge slightly earlier than those of the GP-BO methods and reach higher values, and all BO-based methods explore the unknown place much faster than the NBO ones. This result evidences that our BKI methods are more suitable for large cluttered environments.
C. Time Efficiency
We have presented the exploration results in the previous simulations of typical scenes, and our BKI-based method has shown the desired exploration performance in efficiency and accuracy compared with state-of-the-art methods. For a more intuitive and specific comparison, we further analyze the time cost of each method per exploration step in all maps. As in Table I, the results show the time cost of the whole exploration process per step in the form of means and standard deviations, as well as the average percentage of evaluation and decision-making time spent by the different methods in each step.
[Figure 4 here: (a), (b) map entropy and (c), (d) coverage versus exploration steps on the unstructured map.]
Fig. 4. Map entropy and coverage results of the synthetic unstructured map.
TABLE I
TIME COST COMPARISON OF DIFFERENT EXPLORATION METHODS

Methods              | Synthetic structured map  | Synthetic unstructured map | Seattle map [32]
NBO 30               | 95.29% / 10.4455 ± 0.9409 | 96.00% / 12.1683 ± 1.3856  | 96.95% / 4.8434 ± 0.7311
NBO 60               | 95.38% / 10.9967 ± 1.0676 | 95.93% / 12.4971 ± 2.1583  | 96.93% / 5.3502 ± 0.9009
batch GP 30          | 5.15% / 0.4387 ± 0.0246   | 4.66% / 0.2805 ± 0.0232    | 12.68% / 0.2134 ± 0.0169
batch GP 60          | 6.44% / 0.4444 ± 0.0487   | 5.89% / 0.3021 ± 0.0362    | 14.94% / 0.2291 ± 0.0226
batch BKI 30 (ours)  | 3.05% / 0.4324 ± 0.0276   | 2.93% / 0.2485 ± 0.0346    | 7.67% / 0.2036 ± 0.0254
batch BKI 60 (ours)  | 3.87% / 0.4407 ± 0.0384   | 3.56% / 0.2731 ± 0.0356    | 9.05% / 0.2065 ± 0.0229
GP-BO 30             | 49.71% / 0.9435 ± 0.0609  | 48.09% / 0.6083 ± 0.1121   | 67.97% / 0.5203 ± 0.0554
GP-BO 60             | 74.03% / 1.8265 ± 0.1189  | 72.99% / 1.3558 ± 0.1190   | 84.26% / 1.0528 ± 0.1124
BKI-BO 30 (ours)     | 39.00% / 0.7518 ± 0.0683  | 39.14% / 0.5140 ± 0.0966   | 54.03% / 0.3903 ± 0.1175
BKI-BO 60 (ours)     | 53.74% / 0.9952 ± 0.1061  | 54.31% / 0.7363 ± 0.1186   | 62.45% / 0.4955 ± 0.1775

Note: Time cost of inference per step (in percent) / total time cost of exploration per step of each method (in seconds).
[Figure 5 here: MI prediction error (bits) of the BKI and GP predictions over samples.]
Fig. 5. A challenging example of MI prediction error comparison using BKI and GP methods trained with fewer samples in a randomly selected exploration step.
[Figure 6 here: (a) exploration trajectory; (b) MI surface.]
Fig. 6. An example of BKI-based robot exploration in the large cluttered Seattle map [32]. White square: start point; white star: end point.
Among the 10 methods, the basic NBO methods have the most expensive time consumption (more than about 8∼50 times that of the BKI and GP methods) per step, while the other methods based on GP and BKI cost much less time, showing the efficiency of Bayesian optimization-based approaches. We can further analyze these results from two points of view. From the top row to the bottom, our BKI-based methods achieve better decision-making and inference time efficiency than the corresponding GP-based ones in all maps
[Figure 7 here: (a) map entropy rate and (b) coverage rate versus exploration steps on the Seattle map.]
Fig. 7. Map entropy and coverage results of the Seattle map (batch methods omitted).
when the number of samples increases, e.g., batch BKI 30/60 vs. batch GP 30/60 and BKI-BO 30/60 vs. GP-BO 30/60. We can also observe that the BKI methods run faster than the GP ones when using more training epochs. The BKI methods also bring significant time savings for exploration, such as decreasing the time by about 20% and 45% compared with GP-BO 30 and GP-BO 60, respectively, in the structured map.
From the left column to the right, these above-mentioned differences become more distinct in the unstructured and large cluttered maps, e.g., the time costs per step relative to GP-BO 30 and GP-BO 60 decrease by about 25% and 53% in the Seattle map, respectively, which also verifies that our proposed BKI-based robot exploration methods can improve the time efficiency considerably without losing overall exploration performance compared with the other methods.
VI. CONCLUSIONS
This paper mainly contributed a new, efficient learning-based approach for information-theoretic robot exploration in unknown environments. In particular, a continuous information gain evaluation model for predicting the MI of numerous sampled robot actions is built by introducing the Bayesian kernel inference method. The time complexity of MI prediction is decreased to a logarithmic level in comparison with state-of-the-art methods. An objective function integrating the predicted MI and its uncertainty is also designed to balance exploration and exploitation. The proposed method is verified under an autonomous exploration framework by extensive simulations of different scenes, which reveal that our method overall outperforms the greedy-based and GP-based exploration methods in efficiency without loss of exploration performance, especially in unstructured and large cluttered scenes. Future work mainly involves studying the exploration performance with different α values and kernels, as well as extending our method to 3D scenes.
REFERENCES
[1] H. Azpúrua, M. F. M. Campos, and D. G. Macharet, "Three-dimensional terrain aware autonomous exploration for subterranean and confined spaces," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 2443–2449.
[2] J. Strader, K. Otsu, and A.-a. Agha-mohammadi, "Perception-aware autonomous mast motion planning for planetary exploration rovers," Journal of Field Robotics, vol. 37, no. 5, pp. 812–829, 2020.
[3] P. Stankiewicz, Y. T. Tan, and M. Kobilarov, "Adaptive sampling with an autonomous underwater vehicle in static marine environments," Journal of Field Robotics, vol. 38, no. 4, pp. 572–597, 2021.
[4] B. J. Julian, S. Karaman, and D. Rus, "On mutual information-based control of range sensing robots for mapping applications," The International Journal of Robotics Research, vol. 33, no. 10, pp. 1375–1392, 2014.
[5] B. Charrow, S. Liu, V. Kumar, and N. Michael, "Information-theoretic mapping using Cauchy-Schwarz quadratic mutual information," in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 4791–4798.
[6] Z. Zhang, T. Henderson, S. Karaman, and V. Sze, "FSMI: Fast computation of Shannon mutual information for information-theoretic mapping," The International Journal of Robotics Research, vol. 39, no. 9, pp. 1155–1177, 2020.
[7] Y. Xu, R. Zheng, M. Liu, and S. Zhang, "CRMI: Confidence-rich mutual information for information-theoretic mapping," IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6434–6441, 2021.
[8] Y. Xu, R. Zheng, S. Zhang, and M. Liu, "Confidence-rich localization and mapping based on particle filter for robotic exploration," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 1–7.
[9] G. A. Hollinger and G. S. Sukhatme, "Sampling-based robotic information gathering algorithms," The International Journal of Robotics Research, vol. 33, no. 9, pp. 1271–1287, 2014.
[10] M. G. Jadidi, J. V. Miró, and G. Dissanayake, "Sampling-based incremental information gathering with applications to robotic exploration and environmental monitoring," The International Journal of Robotics Research, vol. 38, no. 6, pp. 658–685, 2019.
[11] A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart, "Receding horizon 'next-best-view' planner for 3D exploration," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1462–1468.
[12] B. Charrow, V. Kumar, and N. Michael, "Approximate representations for multi-robot control policies that maximize mutual information," Autonomous Robots, vol. 37, no. 4, pp. 383–400, 2014.
[13] K. Yang, S. Keat Gan, and S. Sukkarieh, "A Gaussian process-based RRT planner for the exploration of an unknown and cluttered environment with a UAV," Advanced Robotics, vol. 27, no. 6, pp. 431–443, 2013.
[14] B. Charrow, G. Kahn, S. Patil, S. Liu, K. Goldberg, P. Abbeel, N. Michael, and V. Kumar, "Information-theoretic planning with trajectory optimization for dense 3D mapping," in Robotics: Science and Systems, vol. 11, 2015, pp. 3–12.
[15] R. Marchant and F. Ramos, "Bayesian optimisation for informative continuous path planning," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 6136–6143.
[16] R. Oliveira, L. Ott, V. Guizilini, and F. Ramos, "Bayesian optimisation for safe navigation under localisation uncertainty," in International Symposium of Robotics Research. Springer, 2020, pp. 489–504.
[17] G. Francis, L. Ott, R. Marchant, and F. Ramos, "Occupancy map building through Bayesian exploration," The International Journal of Robotics Research, vol. 38, no. 7, pp. 769–792, 2019.
[18] S. Bai, J. Wang, K. Doherty, and B. Englot, "Inference-enabled information-theoretic exploration of continuous action spaces," in International Symposium of Robotics Research. Springer, 2015, pp. 419–433.
[19] S. Bai, J. Wang, F. Chen, and B. Englot, "Information-theoretic exploration with Bayesian optimization," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1816–1822.
[20] S. Bai, F. Chen, and B. Englot, "Toward autonomous mapping and exploration for mobile robots through deep supervised learning," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 2379–2384.
[21] F. Chen, J. D. Martin, Y. Huang, J. Wang, and B. Englot, "Autonomous exploration under uncertainty via deep reinforcement learning on graphs," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 6140–6147.
[22] T. Wang, R. Liao, J. Ba, and S. Fidler, "NerveNet: Learning structured policy with graph neural networks," in 2018 International Conference
|
| 1156 |
+
on Learning Representations (ICLR), 2018.
|
| 1157 |
+
[23] W. R. Vega-Brown, M. Doniec, and N. G. Roy, “Nonparametric
|
| 1158 |
+
bayesian inference on multivariate exponential families,” Advances in
|
| 1159 |
+
Neural Information Processing Systems, vol. 27, 2014.
|
| 1160 |
+
[24] V. Peretroukhin, W. Vega-Brown, N. Roy, and J. Kelly, “PROBE-GK:
|
| 1161 |
+
Predictive robust estimation using generalized kernels,” in 2016 IEEE
|
| 1162 |
+
International Conference on Robotics and Automation (ICRA), 2016,
|
| 1163 |
+
pp. 817–824.
|
| 1164 |
+
[25] C. Richter, W. Vega-Brown, and N. Roy, “Bayesian learning for safe
|
| 1165 |
+
high-speed navigation in unknown environments,” in International
|
| 1166 |
+
Symposium on Robotics Research.
|
| 1167 |
+
Springer, 2015, pp. 325–341.
|
| 1168 |
+
[26] T. Shan, J. Wang, B. Englot, and K. Doherty, “Bayesian generalized
|
| 1169 |
+
kernel inference for terrain traversability mapping,” in Conference on
|
| 1170 |
+
Robot Learning.
|
| 1171 |
+
PMLR, 2018, pp. 829–838.
|
| 1172 |
+
[27] K. Doherty, T. Shan, J. Wang, and B. Englot, “Learning-aided 3-d
|
| 1173 |
+
occupancy mapping with bayesian generalized kernel inference,” IEEE
|
| 1174 |
+
Transactions on Robotics, vol. 35, no. 4, pp. 953–966, 2019.
|
| 1175 |
+
[28] L. Gan, R. Zhang, J. W. Grizzle, R. M. Eustice, and M. Ghaffari,
|
| 1176 |
+
“Bayesian spatial kernel smoothing for scalable dense semantic map-
|
| 1177 |
+
ping,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 790–
|
| 1178 |
+
797, 2020.
|
| 1179 |
+
[29] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press,
|
| 1180 |
+
2005.
|
| 1181 |
+
[30] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for
|
| 1182 |
+
Machine Learning.
|
| 1183 |
+
The MIT Press, 2005.
|
| 1184 |
+
[31] S. T. O’Callaghan and F. T. Ramos, “Gaussian process occupancy
|
| 1185 |
+
maps,” The International Journal of Robotics Research, vol. 31, no. 1,
|
| 1186 |
+
pp. 42–62, 2012.
|
| 1187 |
+
[32] A. Howard and N. Roy, “The robotics data set repository (radish),”
|
| 1188 |
+
2003. [Online]. Available: http://radish.sourceforge.net/
|
| 1189 |
+
|
5NAyT4oBgHgl3EQfpPjF/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
6tE4T4oBgHgl3EQfCAsz/content/tmp_files/2301.04856v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
6tE4T4oBgHgl3EQfCAsz/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
79A0T4oBgHgl3EQfOf-b/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e4ff6a6f17b8faf51eb35dae9349593157db9a2bec0b0bcfae856566c15c876
+size 3145773
7tE1T4oBgHgl3EQfBwI8/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f52fb04a50790f61d400007e7e0b3b1284416965b9bda347ecd5d55fb4360b96
+size 102289
7tE2T4oBgHgl3EQfPgY3/content/tmp_files/2301.03759v1.pdf.txt ADDED
@@ -0,0 +1,1407 @@
Title: Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W. Daniels5, Jiun-Haw Chu1, Di Xiao4,1, Xiaodong Xu1,4,*

1 Department of Physics, University of Washington, Seattle, Washington 98195, USA
2 Department of Chemistry, Columbia University, New York, NY 10027, USA
3 Intelligence Community Postdoctoral Research Fellowship Program, University of Washington, Seattle, WA, USA
4 Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
5 Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

*Corresponding author's email: xuxd@uw.edu

Abstract: The magnetic tunnel junction (MTJ) is a backbone device for spintronics. Realizing next-generation energy-efficient MTJs will require operating mechanisms beyond the standard means of applying magnetic fields or large electrical currents. Here, we demonstrate a new concept for programmable MTJ operation via strain control of the magnetic states of CrSBr, a layered antiferromagnetic semiconductor used as the tunnel barrier. Switching the CrSBr from antiferromagnetic to ferromagnetic order generates a giant tunneling magnetoresistance ratio without external magnetic field at temperatures up to ≈ 140 K. When the static strain is set near the phase transition, applying small strain pulses leads to active flipping of layer magnetization with controlled layer number and thus magnetoresistance states. Further, finely adjusting the static strain to a critical value turns on stochastic switching between metastable states, with a strain-tunable sigmoidal response curve akin to the stochastic binary neuron. Our results highlight the potential of strain-programmable van der Waals MTJs towards spintronic applications, such as magnetic memory, random number generation, and probabilistic and neuromorphic computing.

Main Text:
The control and readout of discrete magnetic states lies at the foundation of the fields of spintronics and modern information storage1-4. Standard spintronic devices utilize the spin filtering phenomenon, where spin-selective transport processes, such as electron tunneling through magnetic layers, create spin polarization and magnetoresistance5-10. Controlling the energetics and stability of the magnets in such devices, known as magnetic tunnel junctions (MTJs), has enabled many important technological advancements. For instance, switching the orientation of the magnets from anti-parallel (AP) to parallel (P) in stable MTJs results in large changes to the tunneling magnetoresistance (TMR). This behavior is the conceptual basis for magnetic random-access memory (MRAM). On the other hand, when the magnetic layers are thinned so that the energy difference between P and AP states is small, the magnetic order becomes unstable and stochastic switching between the two states is observed11-15. Such stochastic MTJs can serve as probabilistic bits (p-bits), the fundamental building blocks for the emerging fields of probabilistic and neuromorphic computing11,16. Despite the great successes of conventional MTJs in both conventional and probabilistic computing schemes, writing the magnetic memory bits in current MRAM schemes tends to rely on energy-intensive means such as the application of large magnetic fields or currents17. Moreover, since the stability of the MTJ is fixed by the growth thickness, it is difficult to switch from stable MRAM operation to unstable p-bit functionality in the same device.

The recent discovery18 of a reversible strain-induced magnetic phase transition in the air-stable A-type layered antiferromagnetic (AFM) semiconductor CrSBr could offer both a new material platform and operating principle for controlling atomically thin MTJs. The A-type AFM configuration consists of van der Waals (vdW) layers with intralayer ferromagnetic (FM) order and interlayer AFM coupling along the stacking direction, forming intrinsic spin filters that can generate exceptionally large TMR19-22. These previous works have demonstrated that applying an external magnetic field to A-type antiferromagnets with weak interlayer exchange switches the magnetic state from the AFM, high-resistance configuration to intermediate states with layer-dependent interlayer coupling, and then finally to a low-resistance, field-induced FM state (Fig. 1a). In comparison to the previously studied devices, which require continuous application of magnetic field to control the magnetic states, strain could provide an exceptionally energy-efficient operating mechanism as it requires essentially no current. Moreover, the fine, continuous, and reversible tuning of the interlayer exchange could enable unprecedented control of the layer-dependent magnetic structure.
Here, we demonstrate a strain-controlled vdW MTJ with programmable magnetoresistance states and stochastic switching, charting a path towards new memory and computing technologies. The schematic for our strain device is shown in Figure 1b. The vdW MTJ heterostructure is composed of a CrSBr tunnel barrier sandwiched between two narrow graphite contacts. The whole MTJ is fixed to a stretchable polyimide substrate by a gold clamp with a small (≈ 5 µm) window around the junction (Methods). This design ensures a highly efficient strain transfer when the polyimide substrate is stretched by a home-built piezoelectric strain cell18,23, while also allowing for optical spectroscopy measurements of the junction region. The strain is applied along the crystallographic a axis for consistency with previous experiments18. The data in the main text is taken on a MTJ with an ≈ 11 nm tunnel barrier, but the technique is compatible with CrSBr flakes of any thickness.

Figure 1c shows the tunneling magnetoresistance as a function of magnetic field (µ0H) applied along the c axis. In the low-strain condition with a piezo voltage (Vp) of -5 V, CrSBr is in the AFM state at µ0H = 0 T. As |µ0H| increases, the spins cant from the AFM configuration, gradually increasing the conductivity of the MTJ until it reaches the field-induced FM state with |µ0H| > 1 T. This behavior is consistent with the in-plane A-type layered AFM order in CrSBr24. We note that the saturating field is lower than in standard exfoliated CrSBr samples due to a built-in strain, which we determine to be ≈ 0.9 % from the Raman spectra (Methods, and Extended Data Fig. 1). Using the difference in resistance between the FM (Rp) and AFM (Rap) states, we find the tunneling magnetoresistance ratio to be TMR (%) = (Rap − Rp)/Rp × 100 ≈ 3100 %, on par with other 2D A-type AFM tunnel junctions19-21, albeit at much higher operating temperature.
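The TMR ratio quoted above is simple arithmetic on the two resistance states; a minimal sketch (the resistance values here are illustrative placeholders, not measured data from the device):

```python
def tmr_percent(r_ap: float, r_p: float) -> float:
    """Tunneling magnetoresistance ratio: TMR (%) = (R_ap - R_p) / R_p * 100."""
    return (r_ap - r_p) / r_p * 100.0

# An AFM-state resistance 32x the FM-state resistance reproduces the
# reported ratio of ~3100 %.
print(tmr_percent(32.0, 1.0))  # 3100.0
```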
Strain Switching MTJ

When the piezo voltage is increased, the TMR decreases dramatically (Fig. 2a). Furthermore, the shape of the tunneling magnetoresistance curves evolves from a giant, purely negative magnetoresistance (i.e., decreasing resistance with increasing field) at low strain to a small positive MR at high strain (Extended Data Fig. 2), with complex, hysteretic behavior in between, e.g., the curve at 5 V in Fig. 2a. The large decrease in TMR and switching from negative to positive magnetoresistance implies that the interlayer magnetic coupling is switched from AFM to FM at large strain. This picture is confirmed by comparison of the strain-dependent photoluminescence (PL) with the magnetoresistance. The PL shows the characteristic red-shift from the strain-induced AFM to FM phase transition, as demonstrated in a previous report18, which is concurrent with the large changes in tunneling magnetoresistance (Figs. 2b-c). The close correspondence between the magneto-PL and tunneling magnetoresistance is a consequence of the coupling of spin and charge in magnetic semiconductors, which forbids or allows interlayer electronic hybridization and tunneling in the AFM and FM states, respectively.

In the low-strain state, the A-type AFM structure creates tunnel barriers composed of spin filters with alternating spin orientation. In the FM state, however, the tunnel barrier is uniform for all layers, i.e., all spin filters are aligned in the same direction. As a result, applying a saturating magnetic field at low strains strongly enhances the tunneling current with respect to the AFM state (top panel, Fig. 2d). At high strains, however, there is little difference between the zero- and high-magnetic-field tunneling behavior, as expected for a FM tunnel barrier25 (Fig. 2d, bottom). The combination of optical and tunneling measurements unambiguously proves that the strain-induced AFM to FM phase transition is the cause of the large tunneling magnetoresistance switching, excluding trivial origins such as contact failure during the straining process.

We realized strain switching of the MTJ at zero magnetic field. Figure 3a shows the tunneling resistance as the piezo voltage is continually increased. At around 5 V, the sample experiences a switch from AFM to FM states accompanied by a sharp drop in resistance. This strain-induced phase transition generates a TMR ratio of ≈ 2700 %, comparable to the field-induced TMR in the AFM state. When the tension is released, the resistance recovers to its original value. The observed hysteresis between up and down strain sweeps is likely due to a combination of the piezo stack hysteresis and hysteresis in the first-order magnetic phase transition itself. This switching operation is robust over many cycles, with no obvious slipping or degradation over the entire measurement (> 50 strain sweeps).

The strain-switching operation of the MTJ persists to much higher temperature than other 2D MTJs19-22,25-27. Figure 3b shows tunneling magnetoresistance vs strain cycles at select temperatures. At higher temperatures, the transition between low and high tunneling magnetoresistance states becomes broader, but a large strain switching ratio is maintained. As shown in Fig. 3c, the zero-field strain-induced TMR exceeds 10,000 % at 30 K and remains above 100 % up to ≈ 140 K. Interestingly, a dome of positive magnetoresistance as a function of field can still be induced by a large strain at 155 K, well above the Neel temperature of 132 K reported in previous studies24,28,29 (Fig. 3d). A likely explanation is that the enhancement of the interlayer FM exchange induces a long-range ordering of the previously reported intermediate FM (iFM) phase, where the individual layers are ferromagnetically ordered but the interlayer coupling remains paramagnetic29.
Strain programmable layer-dependent magnetism

An intriguing feature of the strain-dependent TMR sweeps is that there are multiple resistance jumps during the AFM-FM phase transition, indicating the formation of multiple magnetic domains in the junction area of about 500 x 500 nm2. These domains are also evident from the complex, hysteretic behavior observed in the field-dependent TMR measurements (Fig. 2a, 5 V). Similar magnetic domain behavior is observed in both the nanoscale junction region and across several microns of the sample in magneto-PL (Extended Data Fig. 3). These results suggest the formation of vertical instead of lateral magnetic domains during the phase transition. The domains may arise from small vertical strain gradients. Thus, near the critical strain of the magnetic phase transition, the interlayer coupling can be FM for some layers and AFM for others. These layer-wise magnetic domains could serve as individual magnetic memory states which can be precisely manipulated by strain.

To explore active control of layer magnetization flipping, we set the static strain near the phase transition and then apply strain pulses with a small and controllable amplitude VPAC (see Figure 4a inset). Figure 4a shows the tunneling current over time as VPAC is increased from 5 mV to 0.25 V. As the pulse reaches an amplitude of ≈ 24 mV, corresponding to a strain of only ≈ 0.0008 %, the amplitude of tunneling current pulses jumps into a distinctly stable state (left-most purple arrow in Fig. 4a). This indicates the MTJ switches between two magnetization states, with the strain pulse actively flipping the magnetization direction of individual layers. Calculating the gauge factor, GF = (ΔR/R)/ε, gives an exceptionally large value of ≈ 3500, among the largest values reported in any system30,31.
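The gauge factor is the fractional resistance change per unit strain; a short sketch with the numbers quoted above (the ≈ 2.8 % resistance change is back-calculated from GF ≈ 3500 and ε ≈ 0.0008 %, purely for illustration):

```python
def gauge_factor(delta_r_over_r: float, strain: float) -> float:
    """GF = (dR/R) / eps, with eps dimensionless (0.0008 % -> 8e-6)."""
    return delta_r_over_r / strain

strain = 0.0008 / 100  # 0.0008 % expressed as a dimensionless strain
print(gauge_factor(0.028, strain))  # ~ 3500
```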
By increasing the magnitude of the strain pulse, the number of layers whose magnetization can be flipped also increases. This is evidenced by the additional distinct jumps in tunneling current with increasing pulse amplitude (purple arrows in Fig. 4a). With a large enough strain pulse, the static-state current abruptly increases, indicating a change in the static magnetic configuration. This behavior is completely different from what is observed in the purely FM or purely AFM states, where increasing the strain pulse magnitude only produces small, continuous changes at a gauge factor three orders of magnitude smaller, and with no change in the static current (Extended Data Fig. 4). Therefore, we conclude that the strain pulse switching observed in Fig. 4a arises from changing the vertical domain structure of the mixed magnetic states. These results demonstrate that multiple individual magnetic domains, including the static magnetic state, can be controlled by applying extremely small strain pulses.

Stochastic domain switching

The demonstrated ability to switch the layer-dependent magnetization suggests that strain can tune the MTJ into a regime where the AFM and FM interlayer couplings are extremely close in energy. Starting from a stable magnetic domain structure, we increase the static strain, VPDC, by 14 mV, as indicated by the red arrow in the top panel of Fig. 4b. In such a condition, the tunneling current proceeds to fluctuate between two values (Fig. 4b, bottom). By decreasing the piezo voltage back to the original value (blue arrow in Fig. 4b, top), the tunneling current returns to a stable value. The current fluctuations can be reliably turned on and off, as demonstrated. To our knowledge, this is the first realization of p-bit type operation using a vdW MTJ. This functionality is enabled by the unique ability of strain to finely and continuously tune the energy barrier between parallel and anti-parallel spin configurations, enabling in-situ switching from stable, MRAM-type to stochastic, p-bit-type domains (Fig. 4c).

By defining the lower current state as a 0 and the higher current state as a 1, we can convert the data to a binary sequence and analyze how the statistics of the domain switching respond to external control knobs, i.e., applied bias voltage and strain. We find that increasing the bias voltage applied to the tunnel junction leads to a large increase in the switching rate (Fig. 4d). Intriguingly, no switching is observed when a current of similar magnitude flows in the opposite direction (Extended Data Fig. 5). This bias-polarity dependence implies that heating is not the origin of the increased switching rate. Instead, the data suggests that the sample has an asymmetric vertical magnetic domain structure which creates a difference in spin polarization, and thus spin-transfer torque effects, when the current is passed in opposite directions12 (Extended Data Fig. 5). Whether such an asymmetric domain structure can give rise to exchange bias32, magnetic ratchet effects33, and other spintronics physics within a single crystal is a fascinating direction for future studies.
| 202 |
+
The relatively high Neel temperature (TN =132 K) of CrSBr in comparison to other 2D A-
|
| 203 |
+
type AFMs creates opportunities for potential device applications operating above liquid nitrogen
|
| 204 |
+
temperature. Figure 4e shows the response function (ρ) of the MTJ as a function of the static piezo
|
| 205 |
+
voltage with a starting value near the strain-induced phase transition at 85 K. The response function
|
| 206 |
+
is calculated by converting the MTJ output to a binary sequence and calculating the average over
|
| 207 |
+
the entire time window. Therefore, a response function value of 0 or 1 indicates a stable magnetic
|
| 208 |
+
domain, while a value of 0.5 indicates equal fluctuations between the two stable states. The ability
|
| 209 |
+
to finely tune the response function should enable both random number generation at ρ = 0.5 and
|
| 210 |
+
a biased Bernoulli sequence at higher or lower values, which can be important for applications
|
| 211 |
+
dealing with Ising and probabilistic computing12. We further note that the applied bias voltage may
|
| 212 |
+
also be used to tune the response function by increasing or decreasing the switching rate,
|
| 213 |
+
potentially providing fine control near the edges of the sigmoidal curve, while also enabling
|
| 214 |
+
interaction between multiple p-bits. In principle, the two independent control parameters (strain
|
| 215 |
+
and bias voltage) could also offer independent tuning of the effective temperature and energy
|
| 216 |
+
landscape of the p-bit, thereby allowing direct stochastic annealing of a p-bit system. Such a
|
| 217 |
+
scheme could significantly reduce the circuit complexity required to realize a large-scale analog
|
| 218 |
+
p-bit annealer, though additional study is needed to establish the full mapping between our two-
|
| 219 |
+
dimensional voltage landscape and the statistical mechanical state space of the p-bit dynamics.
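The response function described above reduces to thresholding the current trace into bits and averaging them over the time window. A minimal sketch of that calculation (the trace and threshold values here are hypothetical, for illustration only):

```python
import numpy as np

def response_function(current, threshold):
    """Convert a tunneling-current trace to bits by thresholding
    (low state -> 0, high state -> 1), then average over the full
    time window to obtain the response function rho."""
    bits = (np.asarray(current) > threshold).astype(int)
    return bits.mean()

# Hypothetical trace in nA: mostly the low-current state with
# occasional switches to the high-current state.
trace = [1.0, 1.0, 2.0, 1.0, 2.0, 2.0, 1.0, 1.0]
rho = response_function(trace, threshold=1.5)  # -> 0.375
```

A value of rho near 0 or 1 then corresponds to a stable domain, while rho ≈ 0.5 corresponds to the stochastic, equally fluctuating regime.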
To test the stochasticity of our device, we analyze the switching data taken when ρ ≈ 0.5, generating a binary sequence with nearly equal numbers of 1s and 0s, as shown in Figs. 4f-g. Since the lock-in detection scheme reads the current much faster than the domain switching rate, we sample the raw data at a frequency slower than the calculated switching rate to prevent non-random runs of 1s and 0s (see discussion in Supplementary Information). We tested the data using the NIST test suite (Fig. 4g) and by analyzing the rise and dwell times of the switching events, which show that the device spends equal amounts of time in the 0 and 1 states within the experimental error (Supplementary Information). These analyses, combined with their physical origin, strongly suggest that the metastable states switch stochastically, thereby acting as a random number generator.
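The NIST suite applies a battery of statistical tests to the binary sequence. Its simplest member, the frequency (monobit) test, conveys the idea; the sketch below is illustrative and not the full SP 800-22 implementation:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1,
    sum them, and compare the normalized sum against a Gaussian
    via the complementary error function."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)  # 0 -> -1, 1 -> +1
    return math.erfc(abs(s) / math.sqrt(2 * n))

balanced = [0, 1] * 500   # equal 1s and 0s -> large p-value, passes
biased = [1] * 1000       # all 1s -> p-value far below 0.01, fails
print(monobit_p_value(balanced), monobit_p_value(biased))
```

A sequence passes this test when the returned p-value exceeds the 0.01 threshold shown in Fig. 4g.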
In conclusion, we have demonstrated that strained single-crystal CrSBr offers a powerful platform for realizing zero-field programmable spintronic devices down to the atomically thin limit (Extended Data Fig. 6). Due to the versatile nature of vdW heterostructures, our results create a new path toward various other programmable 2D quantum devices. For instance, replacing the graphite contacts with superconducting ones could enable field-free control of magnetic Josephson junctions34-37 and superconducting diode effects38-40. Moreover, the ability to switch the layer-dependent magnetization and vertical magnetic domain structure creates unprecedented opportunities to precisely vary the lengths of the FM and AFM tunnel barriers in-situ without significantly changing the overall thickness of the insulating CrSBr barrier layer. This capability could provide a new platform for exploring exotic phenomena that have been proposed in superconductor/ferromagnet junctions with inhomogeneous magnetization, such as spin-triplet correlations. More generally, our clamping and strain technique greatly expands the accessible strain range for cryogenic transport experiments on 2D devices, which could enable exciting discoveries on emergent quantum phenomena in vdW heterostructures, including moiré systems.
Methods

Device fabrication and strain application

To prepare the strain substrate, we first cut transparent 20 µm thick polyimide into strips and epoxied them onto 2D flexure sample plates produced by Razorbill Instruments† using Stycast 2850 FT epoxy. The distance between the edges of the epoxy on either side of the gap was less than 200 µm to enable large strains.

Bulk CrSBr crystals were grown by the same method detailed previously28. The bulk CrSBr and graphite crystals were exfoliated onto PDMS substrates using standard methods, and thin (~ 10 nm) flakes were identified by optical contrast. The MTJs were then assembled through a dry-transfer technique with a stamp consisting of a polypropylene carbonate (PPC) film spin-coated onto a polydimethylsiloxane (PDMS) cylinder. The flakes were picked up in the following order before being deposited onto the polyimide substrate: top graphite, CrSBr, bottom graphite. The long axis of the CrSBr flake was aligned with the strain axis for consistency with previous studies18. After depositing the MTJ heterostructure, the window clamping pattern and electrical contacts to the two graphite contacts were fabricated using standard electron beam lithography techniques, with metal thicknesses of 7 nm Cr and 70 nm Au, respectively. The sample plate was then screwed into the same symmetric three-piezo strain cell used previously18,23 for strain experiments on bulk crystals and in our previous experiments on strained CrSBr.

To calibrate the strain during the experiment, we used the same Raman shift rate of the mode near ~ 346 cm-1 that we determined in the previous study18. We found that there was a rather large built-in strain of ~ 0.9 %, which is consistent with the small saturating field in the out-of-plane direction.
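This calibration amounts to a linear conversion from the fitted Raman peak position to strain. A minimal sketch, assuming the mode shifts to lower wavenumber as tensile strain increases (the sign convention and the example peak value are our assumptions, not measured numbers):

```python
def strain_from_raman(peak_cm1, unstrained_cm1=346.0, rate_cm1_per_pct=4.2):
    """Convert a fitted Raman peak position (cm^-1) to percent strain
    using the ~4.2 cm^-1/% shift rate of the mode near 346 cm^-1.
    Assumes the mode softens (shifts down) under tensile strain."""
    return (unstrained_cm1 - peak_cm1) / rate_cm1_per_pct

# A peak fitted near 342.2 cm^-1 would correspond to ~0.9 % strain,
# comparable to the built-in strain quoted above.
print(strain_from_raman(342.2))
```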
† Certain commercial processes and software are identified in this article to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the processes and software identified are necessarily the best available for the purpose.

The observation that the strain-induced phase transition occurs at negative piezo voltages at lower temperature is consistent with a thermally induced built-in strain which increases with cooling.
Optical measurements:

Optical measurements were performed in a backscattering geometry in a closed-cycle helium cryostat (OptiCool by Quantum Design) with a nominal sample temperature of 60 K. An objective lens focused 632.8 nm light from a He-Ne laser to a spot size of ~ 1 µm. For Raman measurements, a laser power of 200 µW was used, and the collected signal was dispersed using an 1800 mm-1 groove-density grating and detected by a liquid-nitrogen-cooled charge-coupled device (CCD) with an integration time of 210 seconds. BragGrate™ notch filters were used to filter out Rayleigh scattering down to ~ 10 cm-1. A roughly linear background originating from weak polyimide photoluminescence was subtracted to increase the accuracy of the fitting results. For photoluminescence measurements, we used a laser power of 50 µW focused by the same objective. The collected light was dispersed by a 600 mm-1 groove-density grating and detected by the same CCD with a 20 second integration time.
Transport measurements:

Except for the data presented in Extended Data Fig. 6, the transport measurements were performed under the same conditions (OptiCool by Quantum Design) as the optical ones, enabling direct comparison between the observed phenomena. The data shown in Figures 1-3 and 4b were taken using standard two-terminal DC measurements with a Keithley 2450, while the rest of the data in Figure 4 were taken using AC detection with a DC offset voltage applied by a Zurich Instruments HF2 lock-in amplifier. The current was amplified by a current preamplifier (DL Instruments, Model 1211) with a sensitivity of 1 V/10−6 A. For the switching data used in Fig. 4e-f and the stochasticity analysis, a time constant of 5.082 ms with a fourth-order filter was used, which was found to give the best time resolution while maintaining a high signal-to-noise ratio.

The 6L device in Extended Data Fig. 6 was measured in a PPMS DynaCool cryostat by Quantum Design. The data in Fig. S6a-c were taken using the same AC detection scheme, but with an SR860 lock-in amplifier. The switching data in Fig. S6d-e were obtained using a constant-current measurement scheme, achieved by placing a 100 MΩ resistor in series with the device. The resistance signal was then pre-amplified using the differential-ended mode of an SR560 with 20× amplification.
References:

1. Baibich, M. N. et al. Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices. Physical Review Letters 61, 2472-2475 (1988).
2. Binasch, G., Grünberg, P., Saurenbach, F. & Zinn, W. Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange. Physical Review B 39, 4828-4830 (1989).
3. Žutić, I., Fabian, J. & Das Sarma, S. Spintronics: Fundamentals and applications. Reviews of Modern Physics 76, 323-410 (2004).
4. Dieny, B. et al. Giant magnetoresistance in soft ferromagnetic multilayers. Physical Review B 43, 1297-1300 (1991).
5. Julliere, M. Tunneling between ferromagnetic films. Physics Letters A 54, 225-226 (1975).
6. Moodera, J. S., Kinder, L. R., Wong, T. M. & Meservey, R. Large Magnetoresistance at Room Temperature in Ferromagnetic Thin Film Tunnel Junctions. Physical Review Letters 74, 3273-3276 (1995).
7. Miyazaki, T. & Tezuka, N. Giant magnetic tunneling effect in Fe/Al2O3/Fe junction. Journal of Magnetism and Magnetic Materials 139, L231 (1995).
8. Yuasa, S., Nagahama, T., Fukushima, A., Suzuki, Y. & Ando, K. Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions. Nature Materials 3, 868-871 (2004).
9. Parkin, S. S. P. et al. Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers. Nature Materials 3, 862-867 (2004).
10. Ikeda, S. et al. Tunnel magnetoresistance of 604% at 300 K by suppression of Ta diffusion in CoFeB/MgO/CoFeB pseudo-spin-valves annealed at high temperature. Applied Physics Letters 93, 082508 (2008).
11. Borders, W. A. et al. Integer factorization using stochastic magnetic tunnel junctions. Nature 573, 390-393 (2019).
12. Safranski, C. et al. Demonstration of Nanosecond Operation in Stochastic Magnetic Tunnel Junctions. Nano Letters 21, 2040-2045 (2021).
13. Bapna, M. et al. Magnetostatic effects on switching in small magnetic tunnel junctions. Applied Physics Letters 108, 022406 (2016).
14. Mizrahi, A. et al. Neural-like computing with populations of superparamagnetic basis functions. Nature Communications 9 (2018).
15. Rippard, W., Heindl, R., Pufall, M., Russek, S. & Kos, A. Thermal relaxation rates of magnetic nanoparticles in the presence of magnetic fields and spin-transfer effects. Physical Review B 84 (2011).
16. Camsari, K. Y., Sutton, B. M. & Datta, S. p-bits for probabilistic spin logic. Applied Physics Reviews 6, 011305 (2019).
17. Bhatti, S. et al. Spintronics based random access memory: a review. Materials Today 20, 530-548 (2017).
18. Cenker, J. et al. Reversible strain-induced magnetic phase transition in a van der Waals magnet. Nature Nanotechnology 17, 256-261 (2022).
19. Song, T. et al. Giant tunneling magnetoresistance in spin-filter van der Waals heterostructures. Science 360, 1214-1218 (2018).
20. Wang, Z. et al. Very large tunneling magnetoresistance in layered magnetic semiconductor CrI3. Nature Communications 9 (2018).
21. Kim, H. H. et al. One Million Percent Tunnel Magnetoresistance in a Magnetic van der Waals Heterostructure. Nano Letters 18, 4885-4890 (2018).
22. Klein, D. R. et al. Probing magnetism in 2D van der Waals crystalline insulators via electron tunneling. Science 360, 1218-1222 (2018).
23. Hicks, C. W., Barber, M. E., Edkins, S. D., Brodsky, D. O. & Mackenzie, A. P. Piezoelectric-based apparatus for strain tuning. Review of Scientific Instruments 85, 065003 (2014).
24. Telford, E. J. et al. Layered Antiferromagnetism Induces Large Negative Magnetoresistance in the van der Waals Semiconductor CrSBr. Advanced Materials 32, 2003240 (2020).
25. Wang, Z. et al. Magnetization dependent tunneling conductance of ferromagnetic barriers. Nature Communications 12 (2021).
26. Cai, X. et al. Atomically Thin CrCl3: An In-Plane Layered Antiferromagnetic Insulator. Nano Letters 19, 3993-3998 (2019).
27. Wang, Z. et al. Determining the phase diagram of atomically thin layered antiferromagnet CrCl3. Nature Nanotechnology 14, 1116-1122 (2019).
28. Scheie, A. et al. Spin Waves and Magnetic Exchange Hamiltonian in CrSBr. Advanced Science 9, 2202467 (2022).
29. Lee, K. et al. Magnetic Order and Symmetry in the 2D Semiconductor CrSBr. (2020).
30. Wu, J. M. et al. Ultrahigh Sensitive Piezotronic Strain Sensors Based on a ZnSnO3 Nanowire/Microwire. ACS Nano 6, 4369-4374 (2012).
31. Yan, W. et al. Giant gauge factor of Van der Waals material based strain sensors. Nature Communications 12 (2021).
32. Meiklejohn, W. H. & Bean, C. P. New Magnetic Anisotropy. Physical Review 102, 1413-1414 (1956).
33. Lavrijsen, R. et al. Magnetic ratchet for three-dimensional spintronic memory and logic. Nature 493, 647-650 (2013).
34. Gingrich, E. C. et al. Controllable 0–π Josephson junctions containing a ferromagnetic spin valve. Nature Physics 12, 564-567 (2016).
35. Ai, L. et al. Van der Waals ferromagnetic Josephson junctions. Nature Communications 12 (2021).
36. Idzuchi, H. et al. Unconventional supercurrent phase in Ising superconductor Josephson junction with atomically thin magnetic insulator. Nature Communications 12 (2021).
37. Kang, K. et al. van der Waals π Josephson Junctions. Nano Letters (2022).
38. Narita, H. et al. Field-free superconducting diode effect in noncentrosymmetric superconductor/ferromagnet multilayers. Nature Nanotechnology (2022).
39. Ando, F. et al. Observation of superconducting diode effect. Nature 584, 373-376 (2020).
40. Wu, H. et al. The field-free Josephson diode in a van der Waals heterostructure. Nature 604, 653-656 (2022).
Acknowledgements: We thank Xuetao Ma and Yen-Cheng Kung for fabrication advice, G. C. Adam, W. A. Borders, and J. J. McClelland for proofreading the paper, and John Stroud and Heonjoon Park for their help during the initial stages of the project. The strain-controlled optical measurement is mainly supported by DE-SC0018171. The strain-controlled tunneling experiment is mainly supported by the Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) program, grant no. FA9550-19-1-0390. CrSBr crystal synthesis is supported by the Center on Programmable Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under award DE-SC0019443. DGC is supported by the Columbia MRSEC on Precision-Assembled Quantum Materials (PAQM) (DMR-2011738). XX acknowledges support from the State of Washington funded Clean Energy Institute and from the Boeing Distinguished Professorship in Physics. JC acknowledges the Graduate Fellowship from the Clean Energy Institute funded by the State of Washington. ZL and JHC acknowledge the support of the David and Lucile Packard Foundation. This research was supported by an appointment to the Intelligence Community Postdoctoral Research Fellowship Program at the University of Washington, administered by Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence.
Author contributions: XX and John C conceived the project. John C performed the optical and transport measurements with help from Jiaqi C and GD. DO supervised the transport measurements and contributed to fabrication development. John C fabricated the samples with assistance from HY and ZL. John C, DO, TC, JHC, DX, and XX analyzed the data and interpreted the results. TC, MWD and DX provided theoretical support. DGC grew the CrSBr crystals with supervision from XR and XYZ. John C and XX wrote the manuscript with input from all authors. All authors discussed the results.

Competing interests: John C and XX have applied for a patent based on this work.

Data availability: The datasets generated and/or analyzed during this study are available from the corresponding author upon reasonable request.
Figures:
Figure 1 | Straintronic van der Waals magnetic tunnel junction. a, Schematic of the magnetic state evolution of the CrSBr tunnel barrier under application of either magnetic fields along the easy b axis or in-plane uniaxial strain. The changing magnetic configuration creates different resistance states when a bias is applied between the graphite contacts (grey). The red and blue arrows denote the spin direction within each layer. b, Schematic of the straintronic MTJ consisting of graphite contacts sandwiching a CrSBr tunnel barrier (blue). The whole device is fixed by gold clamps to a flexible polyimide substrate (purple), which is then strained. c, Magnetic field dependence of an MTJ using an ≈ 11 nm CrSBr tunnel barrier (optical image inset, scale bar 3 µm) at a temperature of 60 K. The device is slightly strained but remains in the AFM state at zero magnetic field. The magnetic field is applied along the hard c axis, leading to spin canting (inset arrows).
Figure 2 | Strain switchable magnetic tunnel junctions. a, Magnetoresistance sweeps at select piezo voltages with a fixed bias voltage across the MTJ of VB = 0.5 V. The sweeps are offset for clarity. b, Full strain-dependent tunneling magnetoresistance with the magnetic field swept from positive to negative. c, Strain-dependent photoluminescence intensity plot. The beam spot was kept fixed on the junction region while the strain was continuously swept. d-e, Bias-dependent tunneling current with magnetic fields of 0 T (blue) and 3 T (green) applied in the low strain (d) and high strain (e) states. The magnetic state for each curve is depicted in the inset. All measurements were performed at a temperature of 60 K.
Figure 3 | Temperature dependent zero-field tunneling resistance switching. a, Tunneling resistance as a function of piezo voltage. A large TMR change of ≈ 2700 % is observed between the low and high strain states at 60 K. The change in magnetic state from AFM to FM interlayer coupling is depicted by the inset spin diagram. b, Piezo-voltage-dependent tunneling resistance at select temperatures from 30 K to 149 K. c, Temperature dependence of the tunneling magnetoresistance ratio, defined as TMR (%) = (R_AFM − R_FM)/R_FM × 100. d, Magnetic-field-dependent tunneling resistance at 155 K in the low strain (blue) and high strain (red) states.
Figure 4 | Strain control of multiple stable and stochastic layer-dependent magnetic domains. a, Tunneling current over time as strain pulses of increasing amplitude are applied. The inset shows the measurement scheme: a small pulse of amplitude VPAC is applied on top of a static piezo voltage VPDC. The system is initialized by slowly increasing VPDC until the magnetic phase transition starts to occur. As the pulse amplitude increases, the current switching stabilizes into discrete states (denoted by the purple arrows). Additionally, the resting current, i.e. the ground state, can be changed by a sufficiently large pulse. b, Tunneling current over time as the static piezo voltage VPDC is increased (red arrow) and then decreased (blue arrow) by 0.014 V. No strain pulse is applied. Bottom: finer time resolution data of the domain fluctuations observed in the top panel. c, Schematic of strain tuning between magnetic domains. A sufficiently large pulse, VPAC, flips between AFM and FM domains (left). Fine adjustment of the static strain lowers the energy difference between AFM and FM domains, creating a metastable state with stochastic domain switching (right). d, Bias dependence of the switching rate in the metastable state. The piezo voltage is kept constant during the measurement. Data in panels a-d are taken at 60 K. e, Response function of a sensitive magnetic domain as a function of static piezo voltage at a temperature of 85 K. A value of either 0 or 1 indicates a stable domain. f, Tunneling current (top) and converted binary sequence (bottom) over time when the response function is near 0.5, indicating equal amounts of fluctuations between the parallel and antiparallel configurations. g, P-values returned by the NIST random number test suite applied to the binary sequence from f. The black dashed line indicates a p-value of 0.01, the threshold for passing each specific test. The sampling time was 0.1760 seconds (see Supplementary Information).
Extended Data for

Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W. Daniels5, Jiun-Haw Chu1, Di Xiao4,1, Xiaodong Xu1,4,*

1 Department of Physics, University of Washington, Seattle, Washington 98195, USA
2 Department of Chemistry, Columbia University, New York, NY 10027, USA
3 Intelligence Community Postdoctoral Research Fellowship Program, University of Washington, Seattle, WA, USA
4 Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
5 Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

*Correspondence to: xuxd@uw.edu
Extended Data Fig. 1 | Calibration of strain through Raman spectroscopy. a, Raman scattering from the P3 phonon taken on the tunnel junction region at a piezo voltage of 0 V. A linear background originating from the polyimide photoluminescence is subtracted. The narrow linewidth indicates a homogeneous strain. b, Raman intensity plot as a function of piezo voltage. The beam spot is kept on the junction as the piezo voltage is continually increased. c, Measured strain as a function of the voltage applied to the strain cell. The strain is calculated by fitting the data from b with Lorentzian fits and then comparing the peak position to the unstrained value of 346 cm-1 using a strain shift rate of 4.2 cm-1/% as reported in previous studies. We found that there was a built-in strain of ~ 0.9 % at the lowest piezo voltage used at this temperature.
Extended Data Fig. 2 | Magnetoresistance sweeps at select piezo voltages. a-d, Magnetoresistance sweeps as the field is swept down (blue) and up (black) at select piezo voltages through the strain-induced layered magnetization flipping. At low strain (a), large negative magnetoresistance is observed, consistent with AFM order, while small positive magnetoresistance is observed in the high-strain-induced FM state (d). In between, complex and hysteretic magnetic domain behavior is observed.
[Figure panels a–d: resistance (× 10⁷–10⁸ ohm) vs. μ0H (T) at piezo voltages V = 0 V, 10 V, 15 V, and 25 V.]
Extended Data Fig. 3 | Magneto-photoluminescence mapping of magnetic domains. a-b, Comparison of tunneling magnetoresistance (a) and integrated intensity from magneto-photoluminescence (PL) (b) measurements at the same piezo voltage. The correlation of the curves highlights the connection of the interlayer magnetic coupling to both electronic tunneling and exciton luminescence. c, Optical image of the device with different spots labeled by different colors. d-g, Magneto-PL sweeps at each of the spots labeled in (c). The similarities between spots separated by several microns indicate the presence of vertical, rather than lateral, magnetic domains.
[Figure panels: TMR (× 10⁸ ohm) and integrated PL intensity (a.u.) vs. μ0H (T); optical device image; PL spectra (energy 1.3–1.4 eV) vs. μ0H (T) for the labeled spots d–g.]
Extended Data Fig. 4 | Strain pulse data in the purely FM and AFM states. a, Strain pulse amplitude dependence in the purely FM state. As VPAC is increased from 0 to 0.5 V, a continuous change in the current is observed. The calculated gauge factor is ~ 5. b, Change in tunneling current over time as a strain pulse of 0.5 V is applied in the AFM state. Due to the very large resistance, the effect of pulses with smaller amplitude cannot be resolved. A gauge factor of ~ 30 is calculated, but with a large uncertainty due to the high resistance in the AFM state. No changes to the static current are observed in either FM or AFM states.
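For illustration only, a gauge factor of the kind quoted above (~5 in the FM state, ~30 in the AFM state) can be estimated as the relative current change divided by the strain amplitude of the pulse. The simplified definition and the round numbers below are our own assumptions, not the paper's exact extraction procedure:

```python
def gauge_factor(i_on: float, i_off: float, strain_amplitude: float) -> float:
    """(ΔI / I) / Δε for a small strain pulse.

    i_on, i_off: tunneling current (nA) with the pulse on/off.
    strain_amplitude: strain change produced by the pulse (fractional).
    """
    return abs(i_on - i_off) / i_off / strain_amplitude

# Hypothetical round numbers: a 0.2 nA change on 75 nA from a 0.05% strain pulse
print(round(gauge_factor(75.2, 75.0, 0.0005), 1))  # → 5.3
```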
[Figure panels a–b: current (nA) vs. time (sec) in the FM state (VPDC = 24.5 V, VPAC = 0 and 0.5 V) and in the AFM state (VPDC = -4.5 V, VPAC = 0.5 V).]
Extended Data Fig. 5 | Bias-polarity-dependent stochastic switching indicates asymmetric vertical magnetic domain structure. a-b, Tunneling current over time of a metastable domain with a positive (a) and negative (b) bias applied to the MTJ. Despite a similar magnitude of current, no switching is observed under negative bias, ruling out heating effects. Instead, the data is consistent with an asymmetric vertical domain structure, as illustrated in (c). A plausible scenario is that when a positive voltage is applied, the FM layers polarize the tunneling electrons. These spin-polarized electrons apply a spin-transfer-torque-like effect to the AFM layers, enhancing the stochastic switching. On the other hand, when a negative bias is applied, the electrons are not highly polarized and do not exert a spin-transfer torque on the FM layers.
[Figure panels a–b: current (nA) vs. time (sec) at bias = +620.5 mV and -620.5 mV.]
Extended Data Fig. 6 | Strain switching in a six-layer MTJ. a, Magnetoresistance sweeps of a MTJ with a six-layer CrSBr tunnel barrier as the piezo voltage Vp is increased from 32.5 V to 75 V. The domain behavior at piezo voltages between the low-strain AFM (32.5 V) and high-strain FM (75 V) states is much simpler than in the ~ 16-layer device presented in the main text, providing additional evidence that vertical, layer-dependent domains are the origin of the complex hysteretic domain behavior during the magnetic phase transition. The magnetic field is applied along the a axis at a temperature of 20 K. b-c, Magnetoresistance sweeps in the low-strain AFM (b) and high-strain FM (c) states, showing the characteristic switching from negative to positive MR. The optical image of the device is shown inset in b (scale bar 5 µm). d-e, Resistance over time at select piezo voltages during the magnetic phase transition. Stochastic domain switching (d), which can be stabilized by slightly increasing strain (e), is observed. These results highlight the potential for extending the strain-programmable vdW MTJs to the 2D limit.
[Figure panels a–e: resistance (MΩ) vs. μ0H (T) at piezo voltages from 32.5 V to 75 V (a–c), and resistance (MΩ) vs. time (sec) at Vp = 54.3 V (d) and Vp = 52.9 V (e).]
Supplementary information for

Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W. Daniels5, Jiun-Haw Chu1, Di Xiao4,1, Xiaodong Xu1,4,*

1 Department of Physics, University of Washington, Seattle, Washington 98195, USA
2 Department of Chemistry, Columbia University, New York, NY 10027 USA
3 Intelligence Community Postdoctoral Research Fellowship Program, University of Washington, Seattle, WA, USA
4 Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
5 Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA

*Correspondence to: xuxd@uw.edu
Supplementary Text: Additional stochasticity analysis of switching data taken near ρ = 0.5

Since the tunneling current is sampled much faster than the switching rate (~ 0.14 sec), switching data collected over 200 seconds was downsampled and tested using 15 tests from the NIST test suite1. Maurer's Universal Test was excluded since the binary sequence was not long enough. The full sampling time dependence is shown below, using a standard threshold p-value of 0.01. The grey line indicates the sequence passed all of the 15 considered tests. The red line indicates the average domain switching time obtained by dividing the total number of switches by the total time window.
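As a sketch of this pipeline — thresholding the sampled current into a bit sequence and applying a randomness test — the snippet below implements only the simplest NIST SP 800-22 check (the frequency/monobit test); the full 15-test analysis uses the suite cited in the reference, and all names and numbers here are our own illustrations:

```python
import math

def downsample_bits(current, threshold, step):
    """Threshold a sampled current trace into bits, keeping every `step`-th point."""
    return [1 if x > threshold else 0 for x in current[::step]]

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test p-value via the complementary error function."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)  # map 0 -> -1, 1 -> +1
    return math.erfc(abs(s) / math.sqrt(2 * n))

# A balanced (alternating) trace passes the standard 0.01 threshold
bits = downsample_bits([3.1, 3.2, 3.1, 3.2] * 50, threshold=3.15, step=1)
print(monobit_p_value(bits) > 0.01)  # → True
```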
In addition to the NIST test suite, we analyzed the dwell time, i.e. the time between switches, of the 0 and 1 states. The extracted dwell times are plotted as a histogram for the 0 and 1 states, following an exponential envelope as expected for a Poisson process.

We then plot the logarithm of the histogram bin counts (N) versus the dwell time:
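The dwell-time analysis above can be sketched as follows: extract run lengths of each state from the bit sequence, histogram them, and fit log(N) against the bin centers so that the characteristic lifetime is τ = −1/slope. This is a minimal illustration with our own function names, not the authors' code:

```python
import math

def dwell_times(bits, dt, state):
    """Durations (seconds) of consecutive runs of `state` in a sequence sampled every dt."""
    times, run = [], 0
    for b in bits:
        if b == state:
            run += 1
        elif run:
            times.append(run * dt)
            run = 0
    if run:
        times.append(run * dt)
    return times

def fit_tau(times, bin_width):
    """Histogram dwell times, then least-squares fit log(counts) vs. bin center; tau = -1/slope."""
    counts = [0] * (int(max(times) / bin_width) + 1)
    for t in times:
        counts[int(t / bin_width)] += 1
    pts = [((i + 0.5) * bin_width, math.log(c)) for i, c in enumerate(counts) if c > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope
```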
[Figure: number of NIST tests passed (of 15) vs. sampling time (sec), and histograms of zero-state and one-state dwell times (counts vs. time in sec).]
From the linear fits, we find that the characteristic lifetimes, τ, of the 0 and 1 states are τ0 = 159 ± 9 ms and τ1 = 151 ± 9 ms, respectively, where the uncertainty is determined by the standard deviation of the linear fit. Based on this analysis and the NIST test suite results, we conclude that the strained MTJ can generate binary sequences with a high degree of randomness.
[Figure: log(N) vs. dwell time (sec) for the zero and one states, with linear fits.]
References:

1. Ang, S., Chuchill, S., NIST Test Suite, GitHub Repository, https://github.com/stevenang/randomness_testsuite (2017)
7tE2T4oBgHgl3EQfPgY3/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render. See raw diff

8dAzT4oBgHgl3EQfgfzo/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:875e25fa1e785b5707e936425d9f8232958a7f9fa738cdbb9d89dbc711282c8d
size 106544

8tFRT4oBgHgl3EQfpzcC/content/2301.13614v1.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5041e4cceb98e7c299f439c6214fd71d2cb6bb53c8801d392958f71f8880f6d7
size 2268832

8tFRT4oBgHgl3EQfpzcC/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0238f3291ec63c05f18305eabf803d35d46f36effa5e2a4da55dce85b7622c9
size 2490413

9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt
ADDED
The diff for this file is too large to render. See raw diff

9dA0T4oBgHgl3EQfO_9E/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render. See raw diff

AdFQT4oBgHgl3EQfMTYr/content/tmp_files/2301.13267v1.pdf.txt
ADDED
@@ -0,0 +1,1613 @@
ARCHISOUND: AUDIO GENERATION WITH DIFFUSION
Flavio Schneider
Master's Thesis
Supervised by Zhijing Jin, Prof. Bernhard Schölkopf
ETH Zurich
January 2023
ABSTRACT

The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media generation. One area that has yet to be fully explored is the application of diffusion models to audio generation. Audio generation requires an understanding of multiple aspects, such as the temporal dimension, long term structure, multiple layers of overlapping sounds, and the nuances that only trained listeners can detect. In this work, we investigate the potential of diffusion models for audio generation. We propose a set of models to tackle multiple aspects, including a new method for text-conditional latent audio diffusion with stacked 1D U-Nets, that can generate multiple minutes of music from a textual description. For each model, we make an effort to maintain reasonable inference speed, targeting real-time on a single consumer GPU. In addition to trained models, we provide a collection of open source libraries with the hope of simplifying future work in the field. Samples can be found at bit.ly/audio-diffusion.
CONTENTS

1 introduction 1
  1.1 Audio Generation 1
  1.2 Challenges 1
  1.3 Existing Methods 2
  1.4 Research Questions 2
  1.5 Contributions 4
    1.5.1 Models 4
    1.5.2 Libraries 4
  1.6 Structure of the Thesis 5
2 audio representation 7
  2.1 Desirable Properties 7
    2.1.1 Compressibility 7
    2.1.2 Decodability 7
    2.1.3 Diffuseability 7
  2.2 Waveform 8
  2.3 Spectrograms 8
    2.3.1 STFT 8
    2.3.2 MEL 10
3 existing diffusion methods 11
  3.1 DDPM-Diffusion 11
    3.1.1 Noising (0 → t) 12
    3.1.2 Denoising (t − 1 ← t) 13
    3.1.3 Training Objective 13
    3.1.4 Sampling 14
    3.1.5 Limitations 14
  3.2 DDIM 14
  3.3 V-Diffusion 15
    3.3.1 Noising (0 → σt) 15
    3.3.2 Denoising (σt−1 ← σt) 16
    3.3.3 Training Objective 16
    3.3.4 Sampling (σ0 = 0 ← · · · ← σt−1 ← σt = 1) 16
4 architectures 17
  4.1 Our a-unet Library 17
    4.1.1 Background of U-Net 17
    4.1.2 U-Net Block 17
    4.1.3 Items 19
    4.1.4 Plugins 20
  4.2 Our audio-encoders-pytorch Library 21
5 models 23
  5.1 Overview 23
  5.2 Diffusion Unconditional Generator 23
    5.2.1 Motivation 23
    5.2.2 Method 23
    5.2.3 Diffusion Method 24
    5.2.4 Transforms 25
    5.2.5 Usage 26
    5.2.6 Evaluation 27
  5.3 Text-conditional Diffusion 27
    5.3.1 Motivation 27
    5.3.2 Method 28
    5.3.3 Evaluation 29
  5.4 Diffusion Auto-Encoders with Latent Diffusion 30
    5.4.1 Motivation 30
    5.4.2 Method 30
    5.4.3 Evaluation 31
  5.5 Diffusion Upsampler 32
    5.5.1 Motivation 32
    5.5.2 Method 32
    5.5.3 Evaluation 33
  5.6 Diffusion Vocoder 34
    5.6.1 Motivation 34
    5.6.2 Method 34
    5.6.3 Evaluation 35
  5.7 Training Info 35
    5.7.1 Data 35
    5.7.2 Training 35
6 future work 37
7 conclusion 39
bibliography 41
1 INTRODUCTION

Music is an art of time at the intersection of fine-grained perception and symbolic pattern recognition. In this work, we will investigate the use of diffusion models to generate music, or more broadly audio, in order to gain a deeper understanding of this intersection using modern deep learning diffusion models.
1.1 audio generation

Audio generation refers to the process of automatically synthesizing novel waveforms using deep learning models. Audio generation has been commonly approached in two different ways: symbolically or at the waveform level. Symbolically generating audio involves creating a representation of the audio using symbols, such as MIDI data, which can then be converted into an audio waveform. This method is often easier to work with, but it can be difficult to capture all the nuanced details of a sound using symbols. Waveform-based audio generation, on the other hand, involves generating the raw audio waveform directly. This method is more complex, due to the sheer amount of values that have to be generated per second, but it allows for a more precise and detailed representation of sound, that includes all of its intricacies. Furthermore, audio generation can be unconditional or conditional. Unconditional models are trained only on audio data and are able to generate new samples without any additional input. Conditional models, on the other hand, are trained on pairs of audio data and some kind of conditioning information, such as a text description, genre label, lyrics, speaker id, or some other description of the audio. At inference time, this conditioning information can be used to guide the generation of novel audio samples that match the desired characteristics. In this thesis, we will explore methods of conditional and unconditional waveform-level generation.
1.2
|
| 274 |
+
challenges
|
| 275 |
+
Multiple tradeoffs have to be considered when generating audio at the waveform level. To generate a single second of high-quality 48kHz stereo audio, 96000 values must be generated, which is comparable in size to a medium-resolution image. If the goal is to generate an entire song (hundreds of seconds) while maintaining high quality and a reasonable generation speed, the task becomes much more challenging. A common approach to generating long audio sequences is to do so in chunks; however, if the context length, i.e. the amount of audio that the model can consider at any given time, is not sufficient, the resulting structure may not be consistent over multiple seconds or minutes of generation. A longer context may allow for more consistent coarse structure, but may also lead to lower overall quality of detail, or vice versa.

1.3 existing methods
In this section, we will review some of the most well-known or influential waveform-based methods that have been developed to date. One of the pioneering waveform-level generation models is WaveNet (2016 [8]), a fully convolutional architecture that exploits dilated convolutions with various dilation factors in order to capture a large context. It is able to synthesize a few seconds of both speech and classical piano music at 16kHz. Jukebox (2020 [2]) uses multiple quantized autoencoders to discretize sounds at 3 different resolutions, followed by a cascade of transformer upsampler models to generate the quantized representations autoregressively. Jukebox is able to generate 44kHz music conditioned on lyrics, artists and genres. The stack of transformers trades off generation speed for structure and quality. AudioLM (2022 [1]) uses a (residual) quantized autoencoder to compress the waveform into discrete tokens, together with a semantic encoder; a cascade of transformer decoders (semantic, coarse, fine) is then used to generate 16kHz audio continuations top-down from the semantic representation. Musika (2022) trains a set of 1D convolutional autoencoders to compress log-magnitude spectrograms, and a vocoder to reconstruct both phase and magnitude from the compressed representation, using a 2D GAN discriminator trained on sequential chunks of audio; it exploits this process autoregressively to generate longer sequences of 44kHz audio. This method has a limited context length, but is very efficient given the 1D structure of convolutions. Riffusion1 (2022) fine-tunes the Stable Diffusion model [12] on 5s chunks of mel-spectrograms at 44kHz, and uses style transfer to generate multiple coherent concatenated images while conditioning on a textual description of the song. This method has a limited 5s context length, and trades off speed given the large 2D architecture, but works surprisingly well considering that the original model is trained on images, not audio.

1 https://www.riffusion.com/about

1.4 research questions
Diffusion models have recently demonstrated exceptional capabilities in the field of image generation [11, 12], leading to an explosion of incredible AI-generated art2. Iteratively removing small amounts of noise from pure noise allows diffusion models to hallucinate novel samples that share common attributes with the data in the training set. Compared to GANs, diffusion models in the image domain don't suffer from training instability, scale well with parameter size, and have good mode coverage.
As long as the training data can be progressively corrupted from a clean to a fully covered state, diffusion models have the potential to be applied to multiple domains to generate novel samples. This opens up a wide range of possibilities beyond image generation, including video and audio generation.
In this thesis, we explore the potential of diffusion models for audio generation. We will explore whether diffusion models can be used on audio as effectively as with images. The aim is to generate high-quality 48kHz stereo audio as efficiently as possible and to control the generation in different ways, with a focus on text-conditional audio generation.

2 https://www.midjourney.com/showcase/
1.5 contributions

1.5.1 Models

We introduce the following models, some of which are or will be accessible in the archisound library:
• Long: a latent diffusion model for text-conditional music generation that is capable of generating audio with an extended context of multiple minutes at 48kHz, targeting context length and structure (∼857M parameters).
• Crisp: a text-conditional audio generation diffusion model with a context of tens of seconds at 48kHz, targeting simplicity and high-quality waveforms (∼419M parameters).
• Upsampler: a diffusion model to upsample music from 3kHz to 48kHz (∼238M parameters).
• Vocoder: a diffusion model to reconstruct 48kHz waveforms from 80-channel mel-spectrograms, with variable input length (∼178M parameters).
1.5.2 Libraries

Moreover, we open-source the following libraries, on which the previous models are based:
• archisound3, our library including trained models ready to use. This repository doesn't contain any modelling code, but acts as a wrapper and documentation for our models hosted on Huggingface4.
• audio-diffusion-pytorch5 (ADP), the main library including the proposed audio diffusion models. This library has both a-unet and audio-encoders-pytorch as dependencies. At the time of writing, this library has 550+ stars on GitHub, and has been downloaded more than 50000 times on pip.
• a-unet6, a highly customizable library to build U-Net architectures in any dimension, extensible with multiple blocks and plugins. This library can be used for any type of grid data: 1D, 2D, 3D.
• audio-encoders-pytorch7 (AEP), a set of encoders and autoencoders for 1D data.

3 https://github.com/archinetai/archisound
4 https://huggingface.co/archinetai
5 https://github.com/archinetai/audio-diffusion-pytorch
6 https://github.com/archinetai/a-unet
7 https://github.com/archinetai/audio-encoders-pytorch
Some additional libraries we open-source that are not documented in this thesis, but might nevertheless be interesting to the reader, include: cqt-pytorch8 for invertible CQT spectrograms using NSGT, and bitcodes-pytorch9, a method for vector quantization into binary codes.
1.6 structure of the thesis

In Chapter 2, we present the various audio representations and provide a set of tradeoffs that must be considered when selecting an appropriate representation. In Chapter 3, we describe the general principles of diffusion and then delve into the specific diffusion methods that we have tested. In Chapter 4, we examine our custom architectures, including the U-Net and autoencoder, and provide detailed descriptions of each component and how they can be easily integrated into our library. In Chapter 5, we propose a range of diffusion models that combine the diffusion methods from Chapter 3 with our custom architectures from Chapter 4. Finally, in Chapters 6 and 7, we discuss potential future work and present our conclusions.

8 https://github.com/archinetai/cqt-pytorch
9 https://github.com/archinetai/bitcodes-pytorch

2 AUDIO REPRESENTATION
In the following section, we will introduce the different types of audio representation that we can choose from and compare their tradeoffs. Before that, we will have a look at the desirable properties that should be considered.

2.1 desirable properties

2.1.1 Compressibility
We define compressibility as the approximate number of values per second needed for high-quality audio compared to the original waveform, and how many of them can be easily removed without a significant loss in fidelity, e.g. by applying a convolution-only autoencoder to the representation.

2.1.1.1 Perceptibility
Perceptibility refers to how close the representation is to human hearing. This is important because, if we compress a representation that carries a lot of information we are not able to perceive in the first place, we lose a lot of useful capacity. More specifically, humans hear sound in the frequency range from 20Hz to 20kHz on a logarithmic scale, which means that the perceived frequency resolution decreases as we approach 20kHz.

2.1.2 Decodability
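For illustration, this decreasing frequency resolution is commonly modeled with the mel scale (a standard psychoacoustic formula, not specific to this thesis): equal steps in Hz correspond to ever smaller perceptual steps as frequency grows.

```python
import math

def hz_to_mel(f):
    """Mel scale: roughly linear below ~1 kHz, logarithmic above."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

# the same 980 Hz band is perceptually wide at low frequencies
# and narrow near 20 kHz:
low = hz_to_mel(1000.0) - hz_to_mel(20.0)        # 20 Hz -> 1 kHz
high = hz_to_mel(20000.0) - hz_to_mel(19020.0)   # 19.02 kHz -> 20 kHz
```

Here `low` is roughly an order of magnitude larger than `high`, matching the logarithmic hearing described above.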
Decodability refers to how simple and fast it is to decode the given representation back to the waveform domain, where it can be reproduced.

2.1.3 Diffusability
Diffusability is a set of desirable properties that are important in order for a diffusion model to be applicable. In particular, (1) the values should be approximately in the range [−1, 1], (2) the signal should ideally have some inductive biases that can be exploited by the network (primarily 1D or 2D convolutional blocks), (3) time-shift invariance if we are doing inpainting or autoregressive generation, i.e. the representation should look the same at different time steps for the same sound, and (4) the representation should not have too many values in the time dimension.
2.2 waveform

2.3 spectrograms

Figure 2: Magnitude spectrogram and phase of a single-channel STFT.

Figure 3: Mel-scale spectrogram, log-scaled for visibility.
3 EXISTING DIFFUSION METHODS

Diffusion models, first proposed in [3, 17], are most commonly implemented with a U-Net [7, 13] that is repeatedly called during inference for each denoising step. Since the same network is called multiple times during sampling, the weights are shared, making it a recurrent model. Since the data can be progressively corrupted from a clean to a fully covered state, we can use this trick to jump to any intermediate noise level and denoise a single step, backpropagating only once during training. From the perspective of recurrent models, (forward) diffusion allows us to recover the memory at an intermediate state (which we can see as the corrupted datapoint) without the need to backpropagate through the entire chain. This is a useful technique for efficiently generating intermediate states, and has the advantage that it can be highly parallelized during training. Compared to recurrent models, the memory state is predefined by the (noise) corruption process and not fully learned. Diffusion exploits very similar principles to autoregressive transformer models [19], namely a highly parallelizable training process and repeated network calls with weight sharing during sampling. Compared to other generative models like GANs, diffusion models are easier to train and don't suffer from instability problems arising from having to coordinate a generator and a discriminator.
Diffusion models are a category of powerful generative models first introduced in [17] (2015), and later popularized in [3] (2020), thanks to the impressive results obtained in image generation on CIFAR-10. In this section, we will examine different diffusion methods. First, the seminal DDPM [3] method, which involves training the diffusion process with a finite number of denoising steps. Following that, DDIM [18] introduces a few changes that generalize DDPM to an arbitrary number of steps. Then we will introduce V-diffusion from [16], a continuous diffusion method that aims to improve the mixing of the signal-to-noise ratio from DDIM. For DDPM and V-diffusion, we will highlight the most important operations, namely: (1) noising the original datapoint (signal) to a desired noise level, (2) denoising a single step with the use of our (trained) network, (3) the training objective used, and (4) a sampling technique that repeatedly applies (2).

3.1 ddpm-diffusion

DDPM [3] is one of the seminal works in diffusion models. The method starts by assuming that x0^(0), . . . , x0^(D) is a dataset of D i.i.d. points
sampled from an unknown distribution q(x0) (the subscript indicates the noise level, from a maximum of T), and that q(xt | xt−1) := N(xt | µt := √(1 − βt) xt−1, Σt := βt I). In words, if we want to increase the noise level of our datapoint xt−1 by one step to level t, we have to sample a normal distribution with mean and covariance dependent on the previous point and some hyperparameters β1, . . . , βT, called the variance schedule, which control the increase in noise level from the previous point.

Figure 4: Diffusion training.

Figure 5: Diffusion inference.

3.1.1 Noising (0 → t)

By using the previous assumptions, we can derive q(xt | x0), i.e. a way to directly jump from noise level 0 (our clean datapoint) to noise level t. This procedure is called the forward diffusion process. Using the reparametrization trick, it can be shown that this is also a normal distribution, formulated as:

q(xt | x0) := N(xt | µt := √β̄t x0, Σt := (1 − β̄t) I)    (4)

where β̄t := ∏_{i=1}^t (1 − βi) depends on all βi selected in q(xt | xt−1). Since a normal distribution can be sampled from its mean and standard deviation, we can easily sample xt, the noised version of x0, as:

xt = √β̄t x0 + √(1 − β̄t) εt    (5)

where εt ∼ N(0, I).
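The one-jump noising of Eq. 5 can be sketched in a few lines (a minimal numpy illustration, assuming a linear variance schedule; not the ADP implementation):

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample xt ~ q(xt | x0) in a single jump (Eq. 5)."""
    beta_bar = np.prod(1.0 - betas[:t])      # cumulative product up to step t
    eps = rng.standard_normal(x0.shape)      # eps ~ N(0, I)
    xt = np.sqrt(beta_bar) * x0 + np.sqrt(1.0 - beta_bar) * eps
    return xt, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # linear schedule (assumed)
x0 = rng.standard_normal(8)                  # toy "audio" datapoint
xt, eps = forward_diffusion(x0, 500, betas, rng)
```

As t grows, beta_bar shrinks toward zero, so xt interpolates from the clean signal toward pure noise.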
3.1.2 Denoising (t − 1 ← t)

The reverse process distribution q(xt−1 | xt) is also a normal distribution. However, it cannot be directly estimated, as it depends on the entire dataset. Instead, we train a neural network with parameters θ as an approximation:

pθ(xt−1 | xt) := N(xt−1 | µθ(xt), Σθ(xt))    (6)

If our model is trained properly, similarly to the forward process, we will be able to carry out a single denoising step by sampling the normal distribution using the learned mean and variance.

3.1.3 Training Objective

To train our model, we need a handle on the true mean and covariance of the reverse process q(xt−1 | xt). As we have seen before, this is not directly tractable; however, if we include additional information about either x0 (the true datapoint) or εt (the noise used to get xt from x0 in the forward process), we can compute a different but tractable auxiliary distribution. In the case where x0 is given, the distribution is:

q(xt−1 | xt, x0) = N(xt−1 | µ̃(xt, x0), Σ̃(xt, x0))    (7)

with mean µ̃(xt, x0) := (√(1 − βt) (1 − β̄t−1))/(1 − β̄t) xt + (√β̄t−1 βt)/(1 − β̄t) x0 and covariance Σ̃(xt, x0) := ((1 − β̄t−1)/(1 − β̄t)) βt I, as shown in [3]. To train our network, we then minimize the divergence between this tractable distribution and the distribution estimated with our model:

Lt := DKL[q(xt−1 | xt, x0) ∥ pθ(xt−1 | xt)]    (8)
   = Ex0[ (1 / (2 ∥Σθ(xt)∥²₂)) ∥µ̃(xt, x0) − µθ(xt)∥²₂ ]    (9)

This amounts to a simple L2 loss between the auxiliary mean and the mean estimated by the model, with an extra scaling factor that depends on the covariance; in [3] the covariance is fixed to Σθ(xt) = βt I. A more rigorous argument using variational inference can be applied to show that this is a lower bound of the negative log-likelihood of the data distribution. More concretely, our model fθ will output an estimated mean given the noisy datapoint and the noise level as input: µθ(xt) = fθ(xt, t), which we can then use to sample the next xt−1 from a normal distribution.
If instead we assume εt is given, we can follow a similar procedure to get the loss Lt:

Lt := DKL[q(xt−1 | xt, εt) ∥ pθ(xt−1 | xt)]    (10)
   = E[ (βt² / (2βt(1 − β̄t) ∥Σθ(xt)∥²₂)) ∥εt − εθ(xt)∥²₂ ]    (11)

In this case our model will estimate the noise instead of the mean of the datapoint xt, i.e. εθ(xt) = fθ(xt, t); however, we can still recover the mean as: µ̃ = (1/√(1 − βt)) (xt − (βt/√(1 − β̄t)) εt). Empirically, it has been shown in [3] that the objective can be simplified further by ignoring the scaling factor:

Lt = Eεt[ ∥εt − εθ(xt)∥²₂ ]    (12)

The final objective function to train the model is then computed with random noise levels t sampled from a uniform distribution:

L := Et∼[1,T][Lt]    (13)
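A single Monte Carlo estimate of the simplified objective (Eqs. 12 and 13) can be sketched as follows (numpy, with a zero-output placeholder standing in for the trained network fθ; illustrative only):

```python
import numpy as np

def simplified_ddpm_loss(f_theta, x0, betas, rng):
    """One Monte Carlo estimate of the simplified DDPM objective (Eqs. 12-13)."""
    T = len(betas)
    t = int(rng.integers(1, T + 1))            # t ~ Uniform[1, T]
    beta_bar = np.prod(1.0 - betas[:t])
    eps = rng.standard_normal(x0.shape)        # noise used in the forward jump
    xt = np.sqrt(beta_bar) * x0 + np.sqrt(1.0 - beta_bar) * eps
    eps_hat = f_theta(xt, t)                   # network predicts the noise
    return float(np.mean((eps - eps_hat) ** 2))

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)
x0 = rng.standard_normal(8)
loss = simplified_ddpm_loss(lambda x, t: np.zeros_like(x), x0, betas, rng)
```

Each training step draws a fresh t and ε, so the expensive T-step chain never has to be unrolled during training.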
3.1.4 Sampling

Sampling in DDPM is very straightforward: we start with xT ∼ N(0, I) and recursively call the model T times, using at each step the estimated means µθ(xt) (or noises εθ(xt)) of the T normal distributions to get each subsequent sample: xT−1 ∼ pθ(· | xT), . . . , x1 ∼ pθ(· | x2), x0 ∼ pθ(· | x1), where x0 will be our generated output datapoint. Note that this is a stochastic sampling process, since at each step additional noise is added when sampling the normal distribution.
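The T-step ancestral sampling loop can be sketched as follows (numpy, with the variance fixed to βt as in [3]; the zero-output placeholder stands in for the trained noise predictor εθ):

```python
import numpy as np

def ddpm_sample(f_theta, shape, betas, rng):
    """Ancestral DDPM sampling: denoise xT ~ N(0, I) for T steps."""
    T = len(betas)
    x = rng.standard_normal(shape)                 # xT ~ N(0, I)
    beta_bars = np.cumprod(1.0 - betas)
    for t in range(T, 0, -1):
        beta, beta_bar = betas[t - 1], beta_bars[t - 1]
        eps_hat = f_theta(x, t)                    # predicted noise
        mean = (x - beta / np.sqrt(1.0 - beta_bar) * eps_hat) / np.sqrt(1.0 - beta)
        noise = rng.standard_normal(shape) if t > 1 else 0.0
        x = mean + np.sqrt(beta) * noise           # fresh noise at every step but the last
    return x

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)
x0 = ddpm_sample(lambda x, t: np.zeros_like(x), (8,), betas, rng)
```

The fresh noise injected at every step is what makes this sampler stochastic, in contrast to the deterministic DDIM update discussed next.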
3.1.5 Limitations

This method requires on the order of hundreds of sampling steps to get good-quality samples. Compared to the more modern methods that follow, the number of steps T is a fixed hyperparameter both during training and sampling, limiting its flexibility.

3.2 ddim
DDIM [18] is another seminal work on diffusion models. By introducing a few changes to DDPM, the number of sampling steps used during inference can be changed dynamically while maintaining the same training procedure. This allows sampling between 10× and 100× faster, and trading speed for quality at will. A direct implication of having a variable number of steps during sampling is that we can train with very large T, or even infinitely large T, leading to a continuous diffusion process. The idea of DDIM is that if we know both x0 and xt, we can use q(xt−1 | xt, x0) to sample xt−1. There are two possibilities: either train our network to predict x0 directly (i.e. no sampling), or train our network to predict the noise εt (as done in DDPM), which combined with xt can be used to infer x0. A key observation is that using this alternative method doesn't change the training objective, as the objective only depends on the marginals of the forward diffusion process. Importantly, we can use a different forward process to recover the next step, for example using q(xt−2 | xt, x0) to jump directly to xt−2 instead of xt−1, essentially skipping a sampling step and speeding up the process. If we make the time step continuous, we can jump to any intermediate step in (0, t]. Even more interestingly, this continuous sampling procedure can be viewed through the lens of ordinary differential equations, allowing us to use a variety of existing samplers, like the basic Euler method or more advanced ODE samplers.
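A deterministic DDIM-style update that predicts x̂0 and then jumps to an arbitrary earlier step might look like this (numpy sketch using the β̄ notation above; the stride-100 schedule and zero-output network are illustrative assumptions):

```python
import numpy as np

def ddim_step(f_theta, x_t, t, t_prev, beta_bars):
    """Deterministic DDIM update: predict x0 from eps, then jump t -> t_prev."""
    bb_t = beta_bars[t - 1]
    bb_prev = beta_bars[t_prev - 1] if t_prev > 0 else 1.0   # no noise at step 0
    eps_hat = f_theta(x_t, t)
    x0_hat = (x_t - np.sqrt(1.0 - bb_t) * eps_hat) / np.sqrt(bb_t)
    return np.sqrt(bb_prev) * x0_hat + np.sqrt(1.0 - bb_prev) * eps_hat

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)
beta_bars = np.cumprod(1.0 - betas)
x = rng.standard_normal(8)
# take only 10 evenly spaced steps instead of all 1000
schedule = list(range(1000, 0, -100))
for t, t_prev in zip(schedule, schedule[1:] + [0]):
    x = ddim_step(lambda z, t: np.zeros_like(z), x, t, t_prev, beta_bars)
```

Because each update is a deterministic jump between marginals, the same trained network can be sampled with any step count.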
3.3 v-diffusion

V-diffusion, or v-objective diffusion [16], is a diffusion method inspired by DDIM, trained with a continuous value σt ∈ [0, 1]. This is the method we found to work best on a variety of audio tasks. In v-diffusion, if σt = 0 then xσt represents a datapoint x from the data distribution, and if σt = 1, it will be Gaussian noise ε. In DDIM we can choose either to use the model to predict x0, or to use it to predict εt; in this case, however, a velocity value vσt is estimated, from which both x0 and εσt can be inferred.

3.3.1 Noising (0 → σt)

Figure 6: V-diffusion semicircle.
The noising process uses a weighting on a circle:

xσt = ασt x0 + βσt ε    (14)

where ασt := cos(φt) and βσt := sin(φt), with φt := (π/2) σt. When σt = 0, then xσt = x0, i.e. no noise is added; if instead σt = 1, then xσt = x1 = ε, i.e. only noise ε ∼ N(0, I). Intuitively, the weighting on a circle makes sure that as we move σt linearly from 0 to 1 the noising process slowly removes information from x0. By sampling a random σt ∈ [0, 1], we are more likely to pick a value that resembles x0 instead of pure noise ε, meaning that the model will more often see data with a smaller amount of noise. Empirically, this has been shown to be beneficial over standard DDIM diffusion.
3.3.2 Denoising (σt−1 ← σt)

To denoise from noise level σt to noise level σt−1, we can use our velocity-estimating model v̂σt = fθ(xσt, σt). Note that the velocity here is defined as the derivative vσt := ∂xσt/∂σt, i.e. how much the datapoint changes with a small change in the noise level σt (see the circle figure). As mentioned before, using an estimate of vσt, we can obtain both x0 and εt, which in turn can be used to estimate xσt−1 in DDIM style:

v̂σt = fθ(xσt, σt)    (15)
x̂0 = ασt xσt − βσt v̂σt    (16)
ε̂σt = βσt xσt + ασt v̂σt    (17)
x̂σt−1 = ασt−1 x̂0 + βσt−1 ε̂σt    (18)

In the previous equations, the first three lines show how to recover the clean datapoint x0 and the noise εt from vσt, and the last line remixes the noise with the initial datapoint to get xσt−1. These equations can be formally obtained by using trigonometric properties on the definition of velocity (as shown in the appendix of [16]), and intuitively understood by rearranging vectors on the semicircle.
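Eqs. 15 to 18 can be checked numerically: if the true velocity is plugged in for the network, one denoising step lands exactly on the noising formula evaluated at the lower σ (numpy sketch; names are illustrative):

```python
import numpy as np

def alpha_beta(sigma):
    phi = 0.5 * np.pi * sigma
    return np.cos(phi), np.sin(phi)

def v_denoise_step(f_theta, x_t, sigma_t, sigma_prev):
    """One v-diffusion denoising step (Eqs. 15-18)."""
    a_t, b_t = alpha_beta(sigma_t)
    v_hat = f_theta(x_t, sigma_t)            # estimated velocity
    x0_hat = a_t * x_t - b_t * v_hat         # Eq. 16
    eps_hat = b_t * x_t + a_t * v_hat        # Eq. 17
    a_p, b_p = alpha_beta(sigma_prev)
    return a_p * x0_hat + b_p * eps_hat      # Eq. 18

# sanity check with the true velocity v = alpha * eps - beta * x0:
rng = np.random.default_rng(0)
x0, eps = rng.standard_normal(8), rng.standard_normal(8)
a, b = alpha_beta(0.6)
x_t = a * x0 + b * eps                       # Eq. 14
v_true = a * eps - b * x0
x_prev = v_denoise_step(lambda x, s: v_true, x_t, 0.6, 0.5)
a_p, b_p = alpha_beta(0.5)
assert np.allclose(x_prev, a_p * x0 + b_p * eps)
```

The identity holds because Eqs. 16 and 17 recover x0 and ε exactly when the velocity estimate is exact.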
3.3.3 Training Objective

By taking the derivative of the noising formulation, we can compute the true velocity vσt = ασt ε − βσt x0. The training objective is then:

L = Et∼[0,1],σt[ ∥v̂σt − vσt∥²₂ ]    (19)
  = Et∼[0,1],σt[ ∥fθ(xσt, σt) − (ασt ε − βσt x0)∥²₂ ]    (20)

where ε ∼ N(0, I) and xσt is computed according to the noising formulation.
3.3.4 Sampling (σ0 = 0 ← · · · ← σt−1 ← σt = 1)

To obtain a new datapoint x̂0, some starting random noise ε ∼ N(0, I) is sampled, and the denoising procedure previously demonstrated is iteratively applied over a linear sigma schedule.
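The full sampling loop over a linear sigma schedule can then be sketched as (numpy; the zero-velocity placeholder stands in for the trained network, and the step count is an arbitrary choice):

```python
import numpy as np

def v_sample(f_theta, shape, steps, rng):
    """V-diffusion sampling over a linear sigma schedule 1 -> 0."""
    x = rng.standard_normal(shape)                   # start from pure noise
    sigmas = np.linspace(1.0, 0.0, steps + 1)
    for s_t, s_prev in zip(sigmas[:-1], sigmas[1:]):
        phi_t, phi_p = 0.5 * np.pi * s_t, 0.5 * np.pi * s_prev
        v_hat = f_theta(x, s_t)                      # estimated velocity
        x0_hat = np.cos(phi_t) * x - np.sin(phi_t) * v_hat
        eps_hat = np.sin(phi_t) * x + np.cos(phi_t) * v_hat
        x = np.cos(phi_p) * x0_hat + np.sin(phi_p) * eps_hat
    return x

rng = np.random.default_rng(0)
audio = v_sample(lambda x, s: np.zeros_like(x), (2, 48), 50, rng)   # (channels, time)
```

Each iteration is exactly the denoising step of Eqs. 15 to 18, applied at progressively smaller σ until σ = 0.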
4 ARCHITECTURES

4.1 building a u-net

4.1.1 Background of U-Net

Diffusion models are commonly implemented with a U-Net [13], a type of convolutional architecture originally developed for image segmentation. U-Nets consist of an encoder network and a decoder network, used to learn and preserve fine details at multiple resolutions. The original architecture used 2D convolutions to exploit the spatial structure of images, but in our case we will adapt it to 1D convolutions. The U-Net has evolved over time, with modern versions that incorporate numerous enhancements and improvements, including new skip connections, convolutional blocks, and attention blocks. The a-unet library we provide includes a toolbox of detachable building blocks to build many varieties of U-Nets, with the goal of providing the right level of abstraction so that experimentation and iteration on the basic template are easy.

Figure 7: U-Net block.

4.1.2 U-Net Block

In order to build a generic U-Net block, it is necessary to identify the core components of the architecture (listed in Figure 7):
architectures
These include a downsampling block that simultaneously reduces the resolution and number of channels of the input (typically implemented with a single convolution), a stack of customizable processing items (see subsection 4.1.3 for details), an inner block that may contain another instance of the block recursively, a second stack of processing items that typically mirrors the first stack, an upsampling block that reverses the effects of the downsampling (typically implemented with a single transposed convolution), and a skip block that merges the skip connection using some operation.

Furthermore, we select 3 possible types of conditioning contexts that can be injected in the processing items, namely: a feature-vector based conditioning, typically used with diffusion to provide the noise level; an embedding based conditioning that injects multiple embedding vectors as context, typically used for text/CLIP-embedding based conditioning; and lastly a channel-based conditioning used to inject entire stacks of channels in the block. Depending on the task, we might use a different combination of conditioning methods.

All described characteristics can be defined and customized using the following block:
from a_unet.apex import Block

block = Block(
    dim=1,
    in_channels=2,
    channels=4,
    factor=2,
    items=[...],
    # Optional
    items_up=[...],
    downsample_t=...,
    upsample_t=...,
    skip_t=...,
    inner_block=...
)
This is a building block for a U-Net, where we can customize the number of input/output channels (in_channels), the number of channels post-downsampling, and the downsampling factor. The items list will contain the different items that will be duplicated after the inner block. Optionally, we can change the type of skip connection, downsampling and upsampling operations. The inner_block can be another instance of Block to recursively nest multiple blocks.

Since a U-Net is usually composed of multiple nested blocks where the number of in_channels of the inner block must match the number of channels of the outer block, we provide XUNet as a glue class, and XBlock as a template class for Block to make this process more convenient and automated:
from a_unet.apex import XUNet, XBlock

unet = XUNet(
    dim=1,
    in_channels=2,
    blocks=[
        XBlock(channels=4, factor=2, items=[...]),
        XBlock(channels=8, factor=2, items=[...]),
        XBlock(channels=16, factor=2, items=[...])
    ],
    skip_t=...
)

Shared parameters, e.g. the skip connection skip_t, can be provided once to the XUNet and will in turn be automatically forwarded to all blocks. Parameters can also be provided to a specific XBlock to [override them].

4.1.3
Items

[Out of the box, we provide a set of items: a convolutional pro]cessing unit, a ModulationItem (M) to apply modulation of the different channels provided a feature vector, an AttentionItem (A) for self-attention processing between region vectors, a CrossAttentionItem (C) for cross-attention between region vectors and a provided set of embedding vectors, a FeedForwardItem (F) for MLP-like processing of region vectors, and an InjectItem (I) for injecting a set of provided [...].

Figure 8: U-Net items

The example combination from Figure 8, or any other combination, can be built with XBlock as follows:

from a_unet.apex import (
    ModulationItem as M,
    AttentionItem as A,
    CrossAttentionItem as C,
    FeedForwardItem as F
)

block = XBlock(
    ...,
    items=[M, A, C, F]
)
Additional customized items can be easily included without altering the template code, making experimentation very simple.

4.1.4
Plugins

Plugins are used to augment the U-Net model with additional functionality. It's often the case that we have to wrap the U-Net model with some pre- and post-transformation, or that we have to alter or augment the inputs provided to the U-Net. In order to maintain a modular structure, plugins can be used to directly modify the U-Net type without having to change the model code.

4.1.4.1
Time Conditioning Plugin

The time conditioning plugin is used to convert a floating point value to a conditioning feature vector. This is useful during diffusion to provide the current noise level, or timestep. To obtain the time feature vector from a floating point value, a learned weight is multiplied by the time information to get a frequency vector that is then processed using a pair of sin and cos to get Fourier features. The Fourier features are then transformed to a learned feature vector of the desired size by a stack of MLPs. This function can be easily added to the base U-Net as:

UNetWithTime = TimeConditioningPlugin(UNet)

This extends the U-Net with an additional time parameter, which can be one or more floating point values for each batch element.
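The feature computation itself can be sketched as follows (a hedged NumPy sketch of the idea only; the plugin's actual weight shapes and the MLP stack that follows may differ):

```python
import numpy as np

def fourier_time_features(t, learned_freqs):
    # t: (batch,) noise levels / timesteps; learned_freqs: (n,) learned weights
    angles = 2 * np.pi * np.outer(t, learned_freqs)  # frequency vector
    # A pair of sin and cos gives the Fourier features; a stack of MLPs would
    # then map these to a learned feature vector of the desired size.
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

freqs = np.random.default_rng(0).normal(size=8)  # stands in for a learned weight
features = fourier_time_features(np.array([0.1, 0.5, 0.9]), freqs)
```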
4.1.4.2
Embedding Classifier Free Guidance Plugin

Classifier free guidance is a method proposed in [4]. We provide a ClassifierFreeGuidancePlugin used to increase the conditioning strength of the provided embedding. During training, the embedding is masked with a fixed (learned) embedding with a small probability in order to ensure that the network is able to generate realistic output without access to any conditioning information. During inference, the network is called twice, once with the conditioning embedding to get ŷ_e, and once with the fixed embedding used as mask to get ŷ_m. A scaling factor embedding_scale (λ) is then used to guide the network to produce an output that gives more or less importance to the conditioning embedding compared to the masked embedding:

ŷ = ŷ_m + (ŷ_e − ŷ_m) · λ    (22)

This plugin can be easily used by augmenting the U-Net as:

UNetCFG = ClassifierFreeGuidancePlugin(
    net_t=UNet,
    embedding_max_length=64
)

Later the new UNetCFG model can be called with the additional parameter embedding_mask_proba to probabilistically mask a batch of embeddings during training (e.g. a value of 0.1 will mask 10% of the embeddings with a fixed embedding of length embedding_max_length), or with an embedding_scale parameter during inference, to call the U-Net twice with and without masking, and apply the scaling factor. In both cases, the embedding parameter must be provided as well.
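The guidance step itself reduces to one line; a minimal sketch using the standard classifier-free guidance combination, in which λ = 1 recovers the purely conditional output:

```python
import numpy as np

def cfg_combine(y_masked, y_conditional, embedding_scale):
    # Move the output away from the masked (unconditional) prediction and
    # towards the conditional one, proportionally to embedding_scale.
    return y_masked + (y_conditional - y_masked) * embedding_scale

y_m = np.array([0.0, 1.0])   # prediction with the fixed mask embedding
y_e = np.array([1.0, 2.0])   # prediction with the conditioning embedding
guided = cfg_combine(y_m, y_e, embedding_scale=2.0)
```

With `embedding_scale` above 1, the conditioning is emphasized beyond what the network would produce on its own.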
4.1.4.3
Text Conditioning

The text conditioning plugin augments the U-Net embedding conditioning information with a learned text embedding from a frozen pretrained language model. By default, the T5-base transformer model from [10] is used if no embedder is provided.

UNetWithText = TextConditioningPlugin(
    net_t=UNet,
    embedder=T5Embedder()
)

This adds an additional text field to the U-Net forward method that automatically extends the embedding with text embeddings from a pretrained language model.

4.2
our audio-encoders-pytorch library

The autoencoder component has a similar structure to the U-Net, with a few changes: (1) no skip connections will be used, to make it an autoencoder; (2) no attention blocks will be used, to make it generic to any input sequence length; (3) no conditioning blocks will be applied. We open-source the autoencoder library audio-encoders-pytorch (AEP) as a separate library from a-unet. AEP includes both encoders and decoders, and a set of bottlenecks that can be used to normalize the latent space, namely (1) a variational bottleneck in the style of VAEs [5], (2) a simple tanh bottleneck, (3) a quantizer bottleneck, similar to the one proposed by VQ-VAEs [9]. Furthermore, we propose two encoders that encode spectrograms channelwise into a 1D latent, namely a ME1d (magnitude spectrogram only encoder), or MelE1d (mel spectrogram encoder), both compatible with the different bottlenecks.
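As an illustration, the simplest of these bottlenecks, the tanh variant, just squashes each latent value into [−1, 1] so that diffusion can later operate on the latent (a minimal sketch, not the AEP implementation):

```python
import numpy as np

def tanh_bottleneck(latent):
    # Squash each latent value into [-1, 1] to normalize the latent space
    return np.tanh(latent)

z = tanh_bottleneck(np.array([-25.0, -0.5, 0.0, 0.5, 25.0]))
```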
5
M O D E L S

5.1
overview

In this section we describe various diffusion models and their underlying structures. We investigate various diffusion models that serve different purposes and functions, including upsampling and autoencoding. Although these models may have distinct applications, they are ultimately utilized with the goal of audio generation. All of the different models are implemented using variations and combinations of the previously described architectures (i.e. U-Nets and auto-encoders). The models proposed are implemented in the audio-diffusion-pytorch (ADP) library.

5.2
diffusion unconditional generator

The diffusion generator is the simplest model we propose to synthesize unconditional audio and is implemented with a single 1D U-Net.

5.2.1
Motivation

The unconditional diffusion model is a good starting point to test the overall quality of the particular architecture and diffusion method used. It doesn't include any type of conditioning, making the dataset and training procedure very simple, and at the same time can give a good idea of the generation quality.
5.2.2
Method

The diffusion generator takes a raw high-quality stereo audio source as input from the datasets, which is then corrupted to a random noise level based on the chosen diffusion method. Using a U-Net, the generator then predicts the output, which may be the denoised input or a value that is used to compute the denoised input, depending on the type of diffusion method employed. The noise level (usually called time or σ) is provided as conditioning to the network through an encoded feature vector, to tell the network how much noise must be removed from the provided input. For the diffusion generator neither the embedding conditioning nor the cross attention blocks are used.
Figure 9: Diffusion model training

During inference, a random vector with the same shape as a training audio sample is sampled and the U-Net is iteratively invoked with varying noise levels to generate a new plausible sample from the data distribution.

Figure 10: Diffusion model inference

5.2.3
Evaluation

We evaluated the performance of the proposed model with different diffusion methods. Out of the box, the model demonstrated good results with a basic sampler during inference to generate reasonable [samples. It can struggle with] high dynamic-range [audio], even with advanced sampling methods; if properly loudness-normalized, we get good results [even when the dataset] varies in loudness level. We found v-diffusion to be [a rea]sonable [method] to sample from, with around 50 sampling steps produc[ing a good balance of] sampling speed and sample quality.
5.2.4
Transforms

Independently of the diffusion method used, this model without any addition struggles to generate more than a few seconds of sound. If the raw waveform is provided to the network, the initial convolutional blocks of the U-Net will have to process huge samples, e.g. even a single second of high-quality 48kHz audio requires 48000 values to be processed by the first convolutional block. This can be a speed issue if the audio is not downsampled quickly enough in the U-Net, as the inefficiency will compound over the number of sampling steps of the diffusion process. In addition to that, if attention blocks are used, we will have to downsample enough to make sure that the number of timesteps is in the range of 1024 or 2048 values. Exceeding that will slow down self-attention drastically due to the n² computational complexity for sequence length n. Hence, a lot of downsampling is required with long audio samples if we want to satisfy these criteria.

To mitigate the challenges mentioned earlier, we investigate the use of various methods and audio transforms to convert the raw audio source into a representation that reduces the temporal dimension in exchange for additional channels.

5.2.4.1
Patching

The first transform is patching, proposed originally for the image domain in [6]. We adapt patching to the 1D domain, where the idea is to group sequential time steps into chunks, that will then be transposed to channels. Given a patch size p, the length t is reduced to t/p while the number of channels increases to c · p; at the end of the U-Net processing the channels are unchunked back to the full length. We found patching to give drastic speedups, almost at a factor of p for p = 2, 4, 8, 16, 32, ..., allowing to train models with much longer audio sources. However, even if the audio generation quality almost matches the non-patched version, audible aliasing is present with all factors. This drawback is likely due to the repeated unchunking process, which will have a repeating structure, creating a high-frequency sine wave in the signal. Furthermore, we found that patching with p ⩾ 64 started to degrade quality, probably due to some capacity constraint in the channel dimension. We can think of patching as a deterministic auto-encoding process, with a downsampling factor of p.
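The chunk-and-transpose operation can be written as a pair of reshapes. A minimal NumPy sketch of 1D patching and its inverse (the library may implement this differently):

```python
import numpy as np

def patch(x, p):
    # (batch, channels, length) -> (batch, channels * p, length // p)
    b, c, t = x.shape
    return x.reshape(b, c, t // p, p).transpose(0, 1, 3, 2).reshape(b, c * p, t // p)

def unpatch(x, p):
    # Inverse: (batch, channels * p, length // p) -> (batch, channels, length)
    b, cp, tp = x.shape
    return x.reshape(b, cp // p, p, tp).transpose(0, 1, 3, 2).reshape(b, cp // p, tp * p)

x = np.random.randn(1, 2, 16)
y = patch(x, p=4)    # shape (1, 8, 4)
z = unpatch(y, p=4)  # round-trips back to shape (1, 2, 16)
```

Because the operation is a fixed permutation of values, it is exactly invertible, which is why it can be seen as a deterministic auto-encoder.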
5.2.4.2
STFT

The second transform is the previously introduced STFT. We use the common setting of 1024 num_fft and window length, with a 256 hop size. By wrapping the U-Net with STFT and iSTFT, the transform downsamples the length of the audio by 1024 while equally increasing the channel count. STFT is implemented with the Fast Fourier Transform, hence it's efficient to apply. No normalization is required on the spectrogram, since the diffusion loss will still be applied on the reconstructed wave. This method gives great speedups thanks to the large downsampling, but similarly to patching suffers from degradation in quality compared to the raw wave representation. Perceptible noise is present in the generations both when transforming to magnitude+phase, and when using real+complex.

5.2.4.3
Learned Transform

Lastly, we propose a learned transformation with a single convolutional and transposed-convolutional block at the start and, respectively, end of the U-Net. The transform consists of using a large kernel size and stride of 64. This will downsample the original signal in a single step, trading off small amounts of speed compared to the deterministic patching or FFT-implemented STFT. However, since it's a convolutional method, we can choose the number of channels and increase it to a larger value (e.g. 128, double the kernel size and stride) than used during patching, giving more capacity to be resilient to artifacts. At the same time, we can use ideas from STFT and have large overlapping windows with learned kernels instead of fixed sine/cosine waves (e.g. kernel size 128, stride 64, 64 channels, with padding to preserve dimension), which can help to overcome aliasing. We found this to be the best quality/speed tradeoff method of pre-transforming audio.
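The windowing arithmetic of such a learned transform can be sketched as follows (random filters stand in for learned kernels; kernel size 128, stride 64 and 64 channels as in the overlapping variant above):

```python
import numpy as np

def windowed_downsample(x, kernels, stride):
    # x: (length,); kernels: (channels, kernel_size) filters.
    # Each window of the signal is projected onto every kernel, producing one
    # multi-channel latent vector per hop of `stride` samples.
    channels, kernel_size = kernels.shape
    frames = 1 + (len(x) - kernel_size) // stride
    return np.stack(
        [kernels @ x[i * stride : i * stride + kernel_size] for i in range(frames)],
        axis=-1,
    )  # (channels, frames)

rng = np.random.default_rng(0)
signal = rng.normal(size=4096)
latent = windowed_downsample(signal, rng.normal(size=(64, 128)), stride=64)
```

With stride 64 the time axis shrinks by roughly 64x while the channel axis grows to 64, mirroring the trade described in the text.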
5.2.5
Usage

The diffusion generation model proposed is constructed by first adding the LTPlugin to the default U-Net UNetV0. This plugin wraps the U-Net with the previously described learned transform. After that, we have to provide the U-Net type to the DiffusionModel class, which is responsible for constructing the U-Net, the diffusion training method (by default V-Diffusion), and the diffusion sampler (by default DDIM).

from audio_diffusion_pytorch import (
    DiffusionModel, UNetV0, LTPlugin, VDiffusion, VSampler
)

UNet = LTPlugin(
    UNetV0, num_filters=128, window_length=64, stride=64
)

model = DiffusionModel(
    net_t=UNet,
    in_channels=channels,
    channels=[256, 256, 512, 512, 1024, 1024],
    factors=[1, 2, 2, 2, 2, 2],
    items=[2, 2, 2, 2, 4, 4],
    attentions=[0, 0, 0, 0, 1, 1],
    attention_features=64,
    attention_heads=12,
    diffusion_t=VDiffusion,
    sampler_t=VSampler
)
This model can be easily used to get the diffusion loss for training (which automatically applies the entire diffusion process) or to sample a new element provided the starting noise.

# Training
x = torch.randn(1, 2, 2**21) # [batch, channels, length]
loss = model(x)

# Sampling
noise = torch.randn(1, 2, 2**21)
sample = model.sample(noise=noise, num_steps=50)
5.2.6
Evaluation

We found that it's important for quality to have a single non-downsampled block at the start to process the transformed audio at full resolution. Furthermore, attention blocks are crucial for temporal consistency of the generated audio, but can only be applied after the original waveform is downsampled to around 1024-2048 length. For example, if the original audio has length 2¹⁹ (i.e. ∼11s at 48kHz), we downsample by 64 = 2⁶ in the learned transform, and by 2³ in the 4 blocks before the first attention block, hence the context length of the first attention blocks will be in the desired range of 2¹⁰ = 1024.
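The bookkeeping in this example is just a product of downsampling factors:

```python
length = 2**19            # ~11 s of audio at 48 kHz
learned_transform = 2**6  # stride-64 learned transform
early_blocks = 2**3       # factors [1, 2, 2, 2] of the first four blocks
context = length // learned_transform // early_blocks
assert context == 2**10 == 1024  # sequence length seen by the first attention
```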
This model can generate high quality audio over tens of seconds, possibly more depending on the speed requirements. In general, a larger set of initial convolutional/resnet blocks (closer to the waveform) will result in better audio quality, at the cost of generation speed.

We found that the architecture is able to generalize to longer samples than it was trained on, if attention blocks are used. The samples maintain good long-context awareness even when doubling or more the training length. Note that this increases the attention context size and hence needs to be considered before training.

5.3
text-conditional diffusion

5.3.1
Motivation

We used text as a means of conditioning for several reasons. In Imagen [15] it has been shown that pretrained and frozen language models can be successfully applied to condition the diffusion process to generate images matching a textual description, and that by increasing
the size of the language model will result in an improved text-image matching. This hints at the fact that a similar method might also work [for audio generation], making the interface more generic and easy to use.

5.3.2
Method

[On top of] the U-Net [architecture described previously], we use a frozen pretrained T5 transformer encoder to encode the textual representation into [an] embedding which is used to condition the diffusion model. For music generation, we train on metadata, including the title of the song, [...] [the chunk the model is] trained on and how many total chunks the song is made of (e.g. 1 of 4), [since chunks com]monly start (1 of 4) or end (4 of 4). To make the conditioning more robust, we shuffle the list of metadata and drop each element with a probability of 0.1. [Half of the times we join the] list with spaces and the other 50% of the times we use commas to [separate the elements].

Figure: Text-conditional diffusion model (Text, Embedding, Audio)

To increase the strength of the text conditioning, we apply classifier-free guidance [4]. During training, the text embedding is dropped 10% of the times, in favor of a fixed learned embedding. Text conditioning with T5 and CFG can be easily added to the model with the following additional arguments:

model = DiffusionModel(
    ..., # same arguments as before
    use_text_conditioning=True,
    use_embedding_cfg=True,
    embedding_max_length=64,
    cross_attentions=[1, 1, 1, 1, 1, 1]
)

# Training
loss = model(x, text=['a text'], embedding_mask_proba=0.1)

# Sampling
sample = model.sample(
    noise,
    text=['a text'],
    num_steps=50,
    embedding_scale=...
)

5.3.3
Evaluation

We found text conditioning to work well to [match] audio with the textual description, especially using the genre of the song or more generic words that are found in titles. We also tried text-to-speech (TTS), but found that the model is able to mumble a few words with [...]. This is a common problem in TTS that is usually solved with the help of a [...].
5.4
diffusion auto-encoders with latent diffusion

5.4.1
Motivation

Patching, STFT, and learned transforms can be used to reduce the input length during the diffusion process. Those approaches are advantageous if we want to train a single model end-to-end; however, this is suboptimal since the waveform is expanded to its original full-length shape multiple times during sampling, slowing down the process.

A more appropriate way would be to first encode the waveform, then do the diffusion loop in the compressed representation, never expanding it to the full waveform until the end of the loop. This is the idea proposed in [12] (latent diffusion), where a variational autoencoder is first used to compress images by a few factors to a smaller latent space, and later diffusion is applied to that latent. By compressing the audio before applying diffusion, we can drastically speed up the diffusion sampling procedure, making an important case for an efficient and good quality autoencoder.

5.4.2
Method

There are different ways to implement the autoencoder; however, an important property is that we must be able to apply the diffusion process to its latent space, hence some sort of normalization is required to make sure the values are in the range [−1, 1]. Furthermore, the autoencoder should compress as much as possible without a significant loss in quality. The smaller the latent, the faster the inner diffusion model will be to process and generate.

We experimented with different autoencoders, and found that directly compressing the waveform can only provide around 2x-4x compression without a significant loss in quality. On the other hand, as we have discussed in the representation section, compressing magnitude or mel spectrograms can provide much higher compression rates. The downside is that the spectrogram requires a model (vocoder) to reconstruct the original waveform, even from a non-compressed state.

In this work, we propose to use a magnitude diffusion autoencoder: an encoder (ME1d) first encodes the waveform into a magnitude spectrogram which is then encoded into a latent compressed 64x compared to the original waveform; a diffusion model is later used to reconstruct the waveform conditioned on the latent, acting both as a deterministic compressing encoder and a diffusion vocoder at the same time. In order to make sure the latent space is normalized, we use a tanh function on the bottleneck. Since the decoding/vocoding process is a diffusion model, the waveform can be quickly reconstructed from the latent by using a small step count; if instead a more accurate reconstruction is desired, a higher step count is required. To make
.gribrid oibus-txgt boo 2sd brs
|
| 1347 |
+
|
| 1348 |
+
I69J-fIU6 &6U619fOU 2b66q OU g 2IU&J6 CLn' gUg J91&6 COUf6Xf J6U&f'
|
| 1349 |
+
2·4·3
|
| 1350 |
+
q6COq61 fO 6xbsUg F6 I6bL626UfSIOU pSCk fO MgA61OLUU:
|
| 1351 |
+
q!2O: 2!UC6 f6 I6b626fO 12 gfCJJ COb6226' M6
|
| 1352 |
+
fO &6U61gf6 f6 JSf6Uf MIf f6Xf COUqIfIOUI U f6 2fAJ6 Ot JSf6Uf
|
| 1353 |
+
Lo &f f6 !UgJ Joq6J M6 gbbja g cgecsg& g!tn21ou &eUeifo
|
| 1354 |
+
I6UOA6 ff6UFOU PJOCK2 9Ug 26 COUAOJTIFOUSJ-OUJA 9ICJIf6CfJI6
|
| 1355 |
+
2L6 f6 q!I21OU gIfOGUCOqGI I2 &GUGLIC fO U MSA61OL JGU&' M6
|
| 1356 |
+
E&nLG I: D!!2JOU gTfOGUCOGI JUIGLGUCG
|
| 1357 |
+
oibuA
|
| 1358 |
+
EI&ILG I3: D!LEIOU STfOGUCOGL fISJUU&
|
| 1359 |
+
oibuA
|
| 1360 |
+
IUI TMHTAI HTIW HO-OTUA OIUI
|
| 1361 |
+
31
|
| 1362 |
+
.Igbom roieulib grdt lo tuqri
|
| 1363 |
+
2u& JUf6IboJSfOU gUg gbb6Ug!u& fGJU g2 ggg!fIOUJ coUf6Xxf fo f6
|
| 1364 |
+
boNtM S..
|
| 1365 |
+
oib 1
|
| 1366 |
+
g 26coUqg b2gJbJG Joq6T (s) f JUCL6g26 fU6 2gbJ6 Igf6 O1 6X-
|
| 1367 |
+
g JOM 2gbj6 Igf6 ggIO MIf g bLJgia JUog6J gUg JgfGl bagJbJ6
|
| 1368 |
+
ie el e ()
|
| 1369 |
+
2gbJI& obGifO: nbegbjG2 c9u p6 26 lOL gG1GUt bnbo262:
|
| 1370 |
+
gnfO6UCo612' MJ6L6 fJ6 6UCoq!u& 1UCfIOU J2 X6q fO p6 fJ6 qOMU-
|
| 1371 |
+
1L2' D!tt21OU bagbj612 c9U p6 266 g2 g 2b6cC cg26 Ot qitt21OU
|
| 1372 |
+
JU& GIO2' LG JUOq6I M6 bIobO26' OMGAGI MOIKe qIIGCIA U MSA6-
|
| 1373 |
+
fO 6LO fG fob JgJt Ot fG &LIg (O1 IIg&6) 2SLfIU& gf 2OG l1GdnGUcA
|
| 1374 |
+
Ot 2b6cfLo&Lg2' goMU2gbJ!& g MgA61OL cOLL62boUg2 fo 26ffIu
|
| 1375 |
+
bIOAIg6g MSA61OLI (6&: ILOU 3KH fO 8KH) EIOU F6 b612b6CfIA6
|
| 1376 |
+
MA
|
| 1377 |
+
LI&G I2: IMO-2g&G gIt2JOu &GUGiSfOL MIf geJOU gGCogG1
|
| 1378 |
+
00
|
| 1379 |
+
92i0M
|
| 1380 |
+
ibu
|
| 1381 |
+
Ewpeqtue
|
| 1382 |
+
Ewpeqqtua
|
| 1383 |
+
3s
|
| 1384 |
+
WODE2
|
| 1385 |
+
MSAG1LOU2: VffGUfIOU PJOCK gU JgL&GI COUfGXf JGU&f2 C9U JGJb b.
|
| 1386 |
+
COMIUf OL JSAGL2) OT fJ6 JUIfI9J COUAOJIFIOU9J PJOCK2 JU f6 -V6f COL-
|
| 1387 |
+
161 JOgJ!f 29J fO OFG1 JO6J2' UC162I F6 26 (CUJ
|
| 1388 |
+
etx: M6 tonug nbegiubjG12 fo Gxc6j Ou 2b66cu stg' ga If,2 JkGja gu 69e-
|
| 1389 |
+
cu &ef AGiA 8oog 16enJfe pa begb!u& guamG16 pefMGG IQx gug
|
| 1390 |
+
Debeuqi& oU f6 cobj6xif ot f6 qgtg26t qittn2jou bagbj612
|
| 1391 |
+
2.2.3E00mfo
|
| 1392 |
+
f6 I6COU2fLICFIOU 6eb6CI9JIA It b2IbI& 1O AG1 JOM 2gJbJ6
|
| 1393 |
+
qlgr ot gorsbig Isroitibbs ae bgbivorq gd e gribbadmg rs 1o
|
| 1394 |
+
E&L6 I: DI2O bL GGC6
|
| 1395 |
+
DoMU29wbr6q
|
| 1396 |
+
36nb2gwbf6q
|
| 1397 |
+
.li t
|
| 1398 |
+
fu6 r8y 2suubj6 Isf6 JGu&ry 9ug 26 fs 29bjiu bloc622 fo 16cou-
|
| 1399 |
+
DHILI& IUIGLGUC6' M6 IUfGLbOJ9f6 fJ6 JOM 2gJbJ6 CJ9UUGJ2 fO JSfCJ
|
| 1400 |
+
.b9
|
| 1401 |
+
JgfCJ62 f6 Ontbnf Jr&j 29bJ6 CJ9UU6J2 9Ug J6UC6 c9U p6 bLob6IJA
|
| 1402 |
+
o
|
| 1403 |
+
EI&6 I: DI2OU beb6L fgIUI&
|
| 1404 |
+
b9Jqm62nwod
|
| 1405 |
+
oibuA
|
| 1406 |
+
2 DIEEIOM LVLE
|
| 1407 |
+
33
|
| 1408 |
+
1 rrb2:\/&frpcoAIDIB&ACM
|
| 1409 |
+
EI&nI6 IO: D!!LI2JOU AOCOqGI IUIGGUC6
|
| 1410 |
+
W6 2b6cfLogw
|
| 1411 |
+
EI&nL6 I8: D!LeJOU AcoqGL FIJUJ&
|
| 1412 |
+
92io
|
| 1413 |
+
CouAILgubo26J
|
| 1414 |
+
orbuA
|
| 1415 |
+
J9M J6J
|
| 1416 |
+
M6 2tgCK fJ6 gqqIfIOU9J cJ9UJ6J2 OU f6 IUbf CJgUU6J2 Ot fJ6 N-V16f'
|
| 1417 |
+
COUAOJfIOU pgCK fO Ife MSAGIOL 2JsbG: 2IIJSIJa fO fG begbJG
|
| 1418 |
+
LG q!IL2IOU ACOqGI I2 fIUGg pA IL2f COUAGLFI& FG MSAGIOLIU fO
|
| 1419 |
+
boNtoM s..2
|
| 1420 |
+
gICIf6CfL6 MIf 9JO2f UO CS&6 IUfO JI&-dngJ!fA UIC AOCog6I
|
| 1421 |
+
fIou' M6 brobo26 g 2bj6 gggbrsfIou fgf 9JJOMe fo fhlU Onl n-M6f
|
| 1422 |
+
dSJIfA 8KHS I2IC AOCoqi& g16 2I JCKIU& IU f6 tOJOMI& 26C-
|
| 1423 |
+
-g i - do-- f
|
| 1424 |
+
J6i& pg26g ocoq612: ig6g Aocoq612 c ioqc6 A61 &oog
|
| 1425 |
+
f6Ug fo b1ognC6 9Lfl1gCf2' JU9KIu& fJ6 c926 1O cOUOUJA 26g g66b-
|
| 1426 |
+
i-i 2 do bo viti o .e Isivi on i iof
|
| 1427 |
+
GAGr biobeuJa fhUI& g 2b6ctlo&igJ pgck fo g bjgagpJ6 gqi MgaG-
|
| 1428 |
+
C6IA6' J9KI& fJ6U gU Ig69J I6bL626UffOU 1O gHgO &6U6L9fOU: HOM-
|
| 1429 |
+
-
|
| 1430 |
+
5.6.1 Motivation
|
| 1431 |
+
34
|
| 1432 |
+
models 5.7 training info
|
| 1433 |
+
35
|
| 1434 |
+
In order to flatten the spectrogram, we have to match the configu-
|
| 1435 |
+
ration of the STFT used to obtain the spectrogram, with the configu-
|
| 1436 |
+
ration of the 1d transposed convolution. The key insight is that the
|
| 1437 |
+
STFT operation can be viewed as a 1D convolution with large ker-
|
| 1438 |
+
nel sizes (or window size) of sine and cosine waves, which is then
|
| 1439 |
+
merged in-place using the absolute value, and later mel-scaled. The
|
| 1440 |
+
mel-scaling doesn’t alter the temporal positioning, only the frequency
|
| 1441 |
+
(or channels) of the spectrogram. Hence, if we set large kernel sizes
|
| 1442 |
+
equivalent to the STFT window length, strides equivalent to the STFT
|
| 1443 |
+
hop-length, and proper padding, the transposed convolution will fo-
|
| 1444 |
+
cus on the same context region of the waveform used to obtain the
|
| 1445 |
+
spectrogram. Similarly, we will set the input channels of the trans-
|
| 1446 |
+
posed convolution to match the number of channels used for the mel-
|
| 1447 |
+
spectrogram, and the output channels to 1. Stereo audio is decoded
|
| 1448 |
+
by batching. We used a window-length/kernel-size of 1024 and hop-
|
| 1449 |
+
length/stride of 256, similarly to popular vocoders we used 80 mel-
|
| 1450 |
+
spectrogram channels. With this configuration, the spectrogram has a
|
| 1451 |
+
default 3.2x compression factor over the initial waveform.
|
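The configuration matching described above reduces to simple index arithmetic. The sketch below is an illustrative check, not the thesis implementation: the centered-STFT padding of half a window is a common convention assumed here, and the helper names are ours; kernel 1024, stride 256 and 80 mel channels are the values quoted in the text.

```python
# Matching a centered STFT to a 1d transposed convolution.
KERNEL = 1024  # STFT window length = ConvTranspose1d kernel size
STRIDE = 256   # STFT hop length    = ConvTranspose1d stride
MELS = 80      # mel channels       = ConvTranspose1d input channels

def n_frames(n_samples: int, pad: int = KERNEL // 2) -> int:
    # Frames produced by a strided sliding window over the padded signal.
    return (n_samples + 2 * pad - KERNEL) // STRIDE + 1

def reconstructed_samples(frames: int, pad: int = KERNEL // 2) -> int:
    # Output length of ConvTranspose1d(MELS, 1, KERNEL, STRIDE, padding=pad).
    return (frames - 1) * STRIDE + KERNEL - 2 * pad

n = 2**18                             # one training crop (~5.5 s at 48 kHz)
f = n_frames(n)                       # 1025 frames
assert reconstructed_samples(f) == n  # waveform and reconstruction lengths line up
print(n / (MELS * f))                 # ~3.2, the compression factor in the text
```

The factor is 256/80 = 3.2 up to the single extra frame contributed by centering.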
| 1452 |
+
5.6.3
|
| 1453 |
+
Evaluation
|
| 1454 |
+
This model can produce high-quality waveforms; as with other mod-
|
| 1455 |
+
els, a good reconstruction of high frequencies requires more convolu-
|
| 1456 |
+
tional blocks towards the start of the U-Net. Moreover, we hypothe-
|
| 1457 |
+
size that increasing the number of mel-channels might increase qual-
|
| 1458 |
+
ity for two reasons: first, mel-spectrogram would compress less infor-
|
| 1459 |
+
mation out of the initial waveform, and second, the transposed con-
|
| 1460 |
+
volution would have more channels to flatten the spectrogram and
|
| 1461 |
+
hence more capacity.
|
| 1462 |
+
5.7
|
| 1463 |
+
training info
|
| 1464 |
+
5.7.1
|
| 1465 |
+
Data
|
| 1466 |
+
We trained all of our models on a 2500h mix of audio at 48kHz. In
|
| 1467 |
+
the text-based model, we used metadata such as title, genre, album
|
| 1468 |
+
and artist as conditioning information. For the autoencoder, upsam-
|
| 1469 |
+
pler, and vocoder, we trained on random crops of length 2^18 (∼5.5s at
|
| 1470 |
+
48kHz). For the long-context text-conditional audio generation model,
|
| 1471 |
+
we trained on fixed crops of length 2^21 (∼44s at 48kHz), using the crop
|
| 1472 |
+
index as additional conditioning information.
|
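The quoted durations follow directly from the power-of-two crop sizes and the 48 kHz rate; a quick check (constant and helper names are ours):

```python
SAMPLE_RATE = 48_000  # Hz, as stated above

def crop_seconds(log2_samples: int) -> float:
    # Duration in seconds of a 2**log2_samples-sample crop.
    return 2**log2_samples / SAMPLE_RATE

print(f"2^18 samples -> {crop_seconds(18):.1f} s")  # ~5.5 s
print(f"2^21 samples -> {crop_seconds(21):.1f} s")  # ~43.7 s, the ~44 s above
```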
| 1473 |
+
5.7.2
|
| 1474 |
+
Training
|
| 1475 |
+
We trained all of our models with AdamW, using a learning rate of
|
| 1476 |
+
10^-4, β1 = 0.95, β2 = 0.999, ε = 10^-6, and weight decay of 10^-3. For
|
| 1477 |
+
|
| 1478 |
+
36
|
| 1479 |
+
models
|
| 1480 |
+
all models, we used an exponential moving average with β = 0.995
|
| 1481 |
+
and power of 0.7. We trained all models for around 1M steps with
|
| 1482 |
+
a batch size of 32; this takes approximately 1 week on a single A100
|
| 1483 |
+
GPU for the largest, text-conditional model.
|
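The optimizer and EMA settings above can be summarized in a short sketch. The thesis states only β = 0.995 and a power of 0.7 for the EMA; the warm-up form below, min(β, 1 − (1 + t)^(−power)), follows a common open-source EMA implementation and is an assumption, as are all names.

```python
# AdamW settings from the text: lr=1e-4, betas=(0.95, 0.999), eps=1e-6,
# weight_decay=1e-3. Below, only BETA and POWER are from the text; the
# warmup formula is an assumed, commonly used form.
BETA, POWER = 0.995, 0.7

def ema_decay(step: int) -> float:
    # Ramps from 0 toward 1 and is clamped at BETA after a few thousand steps.
    return min(BETA, 1.0 - (1.0 + step) ** (-POWER))

def ema_update(ema_value: float, new_value: float, step: int) -> float:
    # Standard exponential-moving-average update with the scheduled decay.
    d = ema_decay(step)
    return d * ema_value + (1.0 - d) * new_value

print(ema_decay(0), ema_decay(100), ema_decay(10_000))
```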
| 1484 |
+
|
| 1485 |
+
6
|
| 1486 |
+
F U T U R E W O R K
|
| 1487 |
+
While our models can achieve good generation quality on short few-
|
| 1488 |
+
second segments, or a good structure with longer segments, training
|
| 1489 |
+
an efficient model with both high quality and long context remains
|
| 1490 |
+
an open problem. A few promising future modelling approaches that
|
| 1491 |
+
need more experimentation include: (1) train diffusion models using
|
| 1492 |
+
perceptual losses on the waveforms instead of L2; this might help to
|
| 1493 |
+
decrease the initial size of the U-Net, as we wouldn’t have to pro-
|
| 1494 |
+
cess non-perceivable sounds, (2) stack multiple upsamplers to gen-
|
| 1495 |
+
erate a song top-down from low sample rates to high sample rates,
|
| 1496 |
+
(3) improve the quality of the diffusion autoencoder by using mel-
|
| 1497 |
+
spectrograms instead of magnitude spectrograms as input, (4) other
|
| 1498 |
+
types of conditioning which are not text-based might be useful to nav-
|
| 1499 |
+
igate the audio latent space, which is often hard to describe in words
|
| 1500 |
+
- DreamBooth-like models [14] could be used to assign symbols to
|
| 1501 |
+
sounds, (5) compress mel-spectrograms to a quantized representation
|
| 1502 |
+
with diffusion autoencoders to allow for high compression ratios and
|
| 1503 |
+
later train an autoregressive transformer on top of that.
|
| 1504 |
+
Other simpler improvements on the current models include: (1) in-
|
| 1505 |
+
crease the training data from 2k hours to 60k-100k hours, (2) use
|
| 1506 |
+
more sophisticated diffusion samplers to get higher quality for the
|
| 1507 |
+
same number of sampling steps, (3) for text-based models, use larger
|
| 1508 |
+
pretrained language models to obtain embeddings, which has been shown to
|
| 1509 |
+
be very important for quality in [15].
|
| 1510 |
+
37
|
| 1511 |
+
|
| 1512 |
+
|
| 1513 |
+
7
|
| 1514 |
+
C O N C L U S I O N
|
| 1515 |
+
Generating high-quality audio efficiently is a challenging task as it in-
|
| 1516 |
+
volves the generation of numerous values to accurately represent the
|
| 1517 |
+
sound waves, especially when aiming for high-fidelity stereo sound at
|
| 1518 |
+
a sample rate of 48kHz. In this work, we proposed different methods
|
| 1519 |
+
and models to generate high quality audio from a textual descrip-
|
| 1520 |
+
tion. From models targeting long-context audio with an emphasis on
|
| 1521 |
+
structure, short-context with an emphasis on quality, to other useful
|
| 1522 |
+
models such as the diffusion upsampler and vocoder. We introduced
|
| 1523 |
+
a new method that utilizes text-conditional diffusion models based on
|
| 1524 |
+
1D U-Nets, allowing for the generation of multiple minutes of 48kHz
|
| 1525 |
+
audio on a single consumer GPU. Furthermore, we have provided a
|
| 1526 |
+
collection of open-source libraries to streamline future research, in-
|
| 1527 |
+
cluding potential improvements in audio autoencoders and diffusion
|
| 1528 |
+
models.
|
| 1529 |
+
39
|
| 1530 |
+
|
| 1531 |
+
|
| 1532 |
+
B I B L I O G R A P H Y
|
| 1533 |
+
[1]
|
| 1534 |
+
Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov,
|
| 1535 |
+
Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier,
|
| 1536 |
+
Marco Tagliasacchi, and Neil Zeghidour. AudioLM: a Language
|
| 1537 |
+
Modeling Approach to Audio Generation. 2022. eprint: arXiv:2209.
|
| 1538 |
+
03143.
|
| 1539 |
+
[2]
|
| 1540 |
+
Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook
|
| 1541 |
+
Kim, Alec Radford, and Ilya Sutskever. Jukebox: A Generative
|
| 1542 |
+
Model for Music. 2020. eprint: arXiv:2005.00341.
|
| 1543 |
+
[3]
|
| 1544 |
+
Jonathan Ho, Ajay Jain, and Pieter Abbeel. “Denoising diffu-
|
| 1545 |
+
sion probabilistic models.” In: Advances in Neural Information
|
| 1546 |
+
Processing Systems 33 (Dec. 2020), pp. 6840–6851.
|
| 1547 |
+
[4]
|
| 1548 |
+
Jonathan Ho and Tim Salimans. Classifier-Free Diffusion Guid-
|
| 1549 |
+
ance. 2022. eprint: arXiv:2207.12598.
|
| 1550 |
+
[5]
|
| 1551 |
+
Diederik P Kingma and Max Welling. Auto-Encoding Variational
|
| 1552 |
+
Bayes. 2013. eprint: arXiv:1312.6114.
|
| 1553 |
+
[6]
|
| 1554 |
+
Troy Luhman and Eric Luhman. Improving Diffusion Model Effi-
|
| 1555 |
+
ciency Through Patching. 2022. eprint: arXiv:2207.04316.
|
| 1556 |
+
[7]
|
| 1557 |
+
Ozan Oktay et al. Attention U-Net: Learning Where to Look for the
|
| 1558 |
+
Pancreas. 2018. eprint: arXiv:1804.03999.
|
| 1559 |
+
[8]
|
| 1560 |
+
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Si-
|
| 1561 |
+
monyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew
|
| 1562 |
+
Senior, and Koray Kavukcuoglu. WaveNet: A Generative Model
|
| 1563 |
+
for Raw Audio. 2016. eprint: arXiv:1609.03499.
|
| 1564 |
+
[9]
|
| 1565 |
+
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu.
|
| 1566 |
+
Neural Discrete Representation Learning. 2017. eprint: arXiv:1711.
|
| 1567 |
+
00937.
|
| 1568 |
+
[10]
|
| 1569 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sha-
|
| 1570 |
+
ran Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J.
|
| 1571 |
+
Liu. Exploring the Limits of Transfer Learning with a Unified Text-
|
| 1572 |
+
to-Text Transformer. 2019. eprint: arXiv:1910.10683.
|
| 1573 |
+
[11]
|
| 1574 |
+
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu,
|
| 1575 |
+
and Mark Chen. Hierarchical Text-Conditional Image Generation
|
| 1576 |
+
with CLIP Latents. 2022. eprint: arXiv:2204.06125.
|
| 1577 |
+
[12]
|
| 1578 |
+
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick
|
| 1579 |
+
Esser, and Björn Ommer. High-Resolution Image Synthesis with
|
| 1580 |
+
Latent Diffusion Models. 2021. eprint: arXiv:2112.10752.
|
| 1581 |
+
[13]
|
| 1582 |
+
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net:
|
| 1583 |
+
Convolutional Networks for Biomedical Image Segmentation. 2015.
|
| 1584 |
+
eprint: arXiv:1505.04597.
|
| 1585 |
+
41
|
| 1586 |
+
|
| 1587 |
+
42
|
| 1588 |
+
bibliography
|
| 1589 |
+
[14]
|
| 1590 |
+
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael
|
| 1591 |
+
Rubinstein, and Kfir Aberman. DreamBooth: Fine Tuning Text-to-
|
| 1592 |
+
Image Diffusion Models for Subject-Driven Generation. 2022. eprint:
|
| 1593 |
+
arXiv:2208.12242.
|
| 1594 |
+
[15]
|
| 1595 |
+
Chitwan Saharia et al. Photorealistic Text-to-Image Diffusion Mod-
|
| 1596 |
+
els with Deep Language Understanding. 2022. eprint: arXiv:2205.
|
| 1597 |
+
11487.
|
| 1598 |
+
[16]
|
| 1599 |
+
Tim Salimans and Jonathan Ho. Progressive Distillation for Fast
|
| 1600 |
+
Sampling of Diffusion Models. 2022. eprint: arXiv:2202.00512.
|
| 1601 |
+
[17]
|
| 1602 |
+
Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan,
|
| 1603 |
+
and Surya Ganguli. Deep Unsupervised Learning using Nonequi-
|
| 1604 |
+
librium Thermodynamics. 2015. eprint: arXiv:1503.03585.
|
| 1605 |
+
[18]
|
| 1606 |
+
Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Dif-
|
| 1607 |
+
fusion Implicit Models. 2020. eprint: arXiv:2010.02502.
|
| 1608 |
+
[19]
|
| 1609 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
|
| 1610 |
+
Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polo-
|
| 1611 |
+
sukhin. Attention Is All You Need. 2017. eprint: arXiv : 1706 .
|
| 1612 |
+
03762.
|
| 1613 |
+
|
AdFQT4oBgHgl3EQfMTYr/content/tmp_files/load_file.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
CdFAT4oBgHgl3EQftB4M/vector_store/index.pkl
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6bdba68cccb191522643532c09923483c692dd2dc033f29bad3d7da940ab4e9c
|
| 3 |
+
size 152637
|
CtE1T4oBgHgl3EQf9wbT/vector_store/index.pkl
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:955cce9f369460f2443c386e186e8e81551181abce0d58d7a4721fce2ca3a030
|
| 3 |
+
size 261912
|
GNE1T4oBgHgl3EQf_AZD/content/tmp_files/2301.03575v1.pdf.txt
ADDED
|
@@ -0,0 +1,3147 @@
|
| 1 |
+
1
|
| 2 |
+
This work has been submitted to the IEEE for possible
|
| 3 |
+
publication. Copyright may be transferred without notice, after
|
| 4 |
+
which this version may no longer be accessible.
|
| 5 |
+
arXiv:2301.03575v1 [cs.IT] 9 Jan 2023
|
| 6 |
+
|
| 7 |
+
On the Coexistence of eMBB and URLLC in
|
| 8 |
+
Multi-cell Massive MIMO
|
| 9 |
+
Giovanni Interdonato, Member, IEEE, Stefano Buzzi, Senior Member, IEEE, Carmen D’Andrea, Member, IEEE,
|
| 10 |
+
Luca Venturino, Senior Member, IEEE, Ciro D’Elia, and Paolo Vendittelli
|
Abstract—The non-orthogonal coexistence between the enhanced mobile broadband (eMBB) and the ultra-reliable low-latency communication (URLLC) services in the downlink of a multi-cell massive MIMO system is rigorously analyzed in this work. We provide a unified information-theoretic framework blending an infinite-blocklength analysis of the eMBB spectral efficiency (SE) in the ergodic regime with a finite-blocklength analysis of the URLLC error probability relying on the use of mismatched decoding and of the so-called saddlepoint approximation. Puncturing (PUNC) and superposition coding (SPC) are considered as alternative downlink coexistence strategies to deal with the inter-service interference, under the assumption of only statistical channel state information (CSI) knowledge at the users. The eMBB and URLLC performances are then evaluated over different precoding techniques and power control schemes, accounting for imperfect CSI knowledge at the base stations, pilot-based estimation overhead, pilot contamination, spatially correlated channels, the structure of the radio frame, and the characteristics of the URLLC activation pattern. Simulation results reveal that SPC is, in many operating regimes, superior to PUNC in providing a higher SE for the eMBB while still achieving the target reliability for the URLLC with high probability. Moreover, PUNC might cause an eMBB service outage in the presence of high URLLC traffic loads. However, PUNC turns out to be necessary to preserve the URLLC performance in scenarios where the multi-user interference cannot be satisfactorily alleviated.

Index Terms—Enhanced Mobile Broadband, Error Probability, Massive MIMO, Mismatched Decoding, Network Availability, Non-Orthogonal Multiple Access, Puncturing, Saddlepoint Approximation, Spectral Efficiency, Superposition Coding, Ultra-Reliable Low-Latency Communications.
I. INTRODUCTION

With the advent of the mobile application ecosystem and the resulting increase in the data-processing and storage capabilities of smart devices, several heterogeneous services have emerged, setting various stringent communication requirements in terms of data rates, latency, reliability and massive connectivity. These requirements and the related use cases have been summarized by the 3rd Generation Partnership Project (3GPP) into three macro services, namely enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC) [1]. eMBB services require high peak data rates and stable connectivity, and include most of the everyday-usage applications: entertainment, multimedia, communication, collaboration, mapping, web surfing, etc. URLLC services require a one-way radio latency of 1 ms with 99.999% success probability, and include real-time and time-critical applications, such as autonomous driving, automation control, augmented reality, and video and image processing. mMTC services enable connectivity between a vast number of miscellaneous devices, and include applications such as smart grids, traffic management systems, environmental monitoring, etc.

This work was supported by the Ministero delle Imprese e del Made in Italy (former MISE) within the project “Smart Urban Mobility Management” (5G-SUMMA), Asse II, Supporto alle Tecnologie Emergenti. G. Interdonato, S. Buzzi, C. D’Andrea, L. Venturino and C. D’Elia are with the Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, Italy. They are also affiliated with Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), 43124 Parma, Italy. P. Vendittelli is with TIM S.p.A., 20133 Milan, Italy. S. Buzzi is also affiliated with Politecnico di Milano, 20133 Milan, Italy. Corresponding author: Giovanni Interdonato.
5G started to roll out variously as an eMBB service, essentially as a faster version of LTE, whereas the mMTC and URLLC requirements continue to be refined and will materialize within the next decade, although some experimental activities are already taking place in many parts of the world¹. Academic research and industrial standardization are currently investigating different coexistence mechanisms for such heterogeneous services, apparently moving away from the initial vision of a sliced network [2]. Slicing the network basically means allocating orthogonal resources (storage, computing, radio communications, etc.) to heterogeneous services so as to guarantee their mutual isolation. This approach is, in a broad sense, known as orthogonal multiple access (OMA). As an interesting alternative to orthogonal resource allocation, non-orthogonal multiple access (NOMA) is gaining increasing importance, especially with respect to the allocation of the radio access network (RAN) communication resources. The conventional approach to slicing the RAN is to separate eMBB, mMTC, and URLLC services in the time and/or frequency domains, whereas NOMA relies on efficient coexistence strategies wherein heterogeneous services share the same time-frequency resources, being separated in the power and spatial domains. In this regard, the terminology heterogeneous OMA (H-OMA) is often adopted [2] to distinguish the orthogonal resource allocation of heterogeneous services from that of services of the same type, referred to as OMA. (The same distinction applies to H-NOMA with respect to NOMA.)

¹See, e.g., the funding programs from the Italian former Ministry of Economic Development, as well as those of other European countries, the EU, USA, China and Japan.

Massive MIMO [3]–[5] is a technology that uses a very large number of co-located antennas at the base stations (BSs) to coherently and simultaneously serve multiple users over the same radio resources. The users are multiplexed in the spatial domain by using beamforming techniques that enable high-directivity transmission and reception. The use of many antennas also triggers favorable propagation, which further reduces the multi-user interference, and channel hardening, which reduces the random fluctuations of the effective channel gain. As a consequence, there is no need to adopt intricate signal processing techniques to deal with the multi-user interference. Such aggressive spatial multiplexing, along with the intrinsic practicality and scalability of the massive MIMO technology, leads to high levels of energy and spectral efficiency, spatial diversity, link reliability and connectivity.

The primary focus of massive MIMO research has been on increasing the user data rates, thereby targeting the eMBB requirements. Lately, some studies have highlighted the significant benefits that massive MIMO is able to provide to URLLC [6]–[8] by reducing the outage and error probability, and therefore increasing the link reliability. Higher reliability results in fewer retransmissions which, in turn, translates into lower latency. mMTC also benefits from the massive MIMO technology [7], [9] by capitalizing on its high energy efficiency to increase the devices’ battery lifetime. Besides, favorable propagation enables an aggressive spatial multiplexing of the mMTC devices, facilitating the detection and random access procedures.
A. RELATED WORKS

Coexistence between heterogeneous services has been initially studied in systems wherein a single-antenna BS serves multiple heterogeneous users. In [2], Popovski et al. proposed a first tractable communication-theoretic model that captures the key features of eMBB, URLLC and mMTC traffic. (These features are summarized in Table I.) Specifically, [2] analyzes two scenarios for a single-cell model: (i) slicing for URLLC and eMBB, and (ii) slicing for mMTC and eMBB. The downlink multiplexing of URLLC and eMBB is studied in [10] with the goal of maximizing the utility of the eMBB traffic while satisfying the quality-of-service requirements of the URLLC traffic, and by abstracting the operation at the physical layer. Coexistence mechanisms between URLLC and eMBB traffic, based on the puncturing technique, have been proposed in [11] for the uplink of a multi-cell network wherein a simplified Wyner channel model with no fading was assumed. As for multi-user MIMO systems, in [12] a null-space-based spatial preemptive scheduler for joint URLLC and eMBB traffic is proposed for cross-objective optimization, where the critical URLLC quality-of-service (QoS) is guaranteed while maximizing the eMBB ergodic capacity. The spatial degrees of freedom at the BS are leveraged to fulfill the URLLC decoding requirements without jeopardizing the performance of the eMBB users. A similar study, but for a distributed setup, was conducted in [13], where a joint user association and resource allocation problem is formulated for the downlink of a fog network, considering the coexistence of URLLC and eMBB services for internet-of-things (IoT) applications. An analytic hierarchy process was proposed for setting the priorities of the services and for formulating a two-sided matching game where a stable association between the fog network infrastructure and the IoT devices is established.
The coexistence between eMBB and URLLC has attracted the most interest [14]–[18], and is mainly handled with three alternative techniques, herein listed in descending order of complexity:
• successive interference cancellation (SIC), with which the receiver iteratively decodes and removes the contributions of a specific service from the cumulative received signal. This approach requires that the receiver has access to the channel state information (CSI) to be able to perform the multi-stage decoding, with decreasing levels of interference, up to the required successful decoding probability.
• puncturing (PUNC), which consists of preventing the inter-service interference. In the downlink, whenever the transmitter has to transmit a URLLC signal, the eMBB signals are dropped over the channel uses involved in the URLLC transmission. In the uplink, the receiver uses an erasure decoder to discard the eMBB signals, provided that it is able to detect the presence of URLLC transmissions, e.g., via energy detection.
• superposition coding (SPC), with which the transmitter simply sends a linear combination of the eMBB and URLLC signals. At the receiver, both for the uplink and the downlink, the inter-service interference is treated as uncorrelated noise, an approach known as treating interference as noise (TIN). Again, this approach requires the receiver to be able to detect the presence of the undesired transmissions.
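As a toy illustration of the two downlink options above, the sketch below builds one slot of symbols under PUNC and under SPC; all values, including the power split `rho` and the activation probability, are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12  # channel uses in one slot
embb = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # eMBB symbols
urllc = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # URLLC symbols
active = rng.random(n) < 0.25  # channel uses carrying URLLC traffic

# PUNC: the eMBB symbols are dropped wherever URLLC is transmitted.
punc = np.where(active, urllc, embb)

# SPC: a linear combination of the two signals is sent instead, with a
# share rho of the power given to URLLC (illustrative value).
rho = 0.6
spc = np.where(active, np.sqrt(rho) * urllc + np.sqrt(1 - rho) * embb, embb)

print("punctured channel uses:", int(active.sum()))
```

Under PUNC the eMBB stream loses exactly the active channel uses, while under SPC it is received everywhere but suffers inter-service interference on those channel uses — the trade-off at the heart of the comparison in this paper.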
In [14], the coexistence of URLLC and eMBB services in the uplink of a C-RAN architecture with shared analog fronthaul links is analyzed, accounting for SIC, puncturing, and TIN. This work provides an information-theoretic study of the performance of URLLC and eMBB traffic under both H-OMA and H-NOMA, by considering standard cellular models with additive Gaussian noise links and a finite inter-cell interference. The main conclusions are that NOMA achieves higher eMBB rates with respect to H-OMA, while guaranteeing reliable low-rate URLLC communication with minimal access latency. Moreover, H-NOMA under SIC is seen to achieve the best performance, while, unlike the case with digital capacity-constrained fronthaul links, TIN always outperforms puncturing. A similar analysis is conducted in [11], including both the uplink and the downlink of a C-RAN without analog fronthaul, but considering practical aspects such as fading, the lack of CSI for the URLLC transmitters, rate adaptation for the eMBB transmitters, and a finite fronthaul capacity. Abreu et al. in [16] analyze both the H-OMA and H-NOMA options for eMBB traffic and grant-free URLLC traffic in the uplink, accounting for minimum mean square error (MMSE) receivers with and without SIC, and under the assumption of Rayleigh fading channels. The resulting outage probability and achievable rates show that TIN is mostly beneficial in the sufficiently high-SNR regime when SIC is employed or, in some cases, with a low URLLC load. Otherwise, H-OMA supports higher loads for both services simultaneously. Recently, [17] proposed an approach to improve the supported loads for URLLC in the uplink, for both H-OMA and H-NOMA, in the presence of eMBB
TABLE I
FEATURES OF THE 5G USE CASES

| | eMBB | URLLC | mMTC |
| --- | --- | --- | --- |
| characteristics | high rate, moderate reliability | low latency, ultra reliability, low rate | low rate, large connectivity |
| traffic | large payload, several devices | small payload, few devices | small payload, massive devices |
| activation pattern | stable | intermittent | intermittent |
| time span | long, multiple resources | short, slot | long, multiple resources |
| frequency span | single/multiple resources | multiple resources, diversity | single resource |
| scheduling | to prevent access collision | for high reliability | infeasible |
| random access | if needed | to support intermittency | fundamental |
| target | maximize data rate | meet latency and reliability requirements | maximize supported arrival rate |
| reliability requirement | ∼10^−3 | ∼10^−5 | ∼10^−1 |
| applications | video streaming, augmented reality, entertainment | connected factories, traffic safety, autonomous vehicles, telemedicine | internet of things, low-power sensors, smart cities |
traffic, showing the superiority of H-NOMA in ensuring the reliability requirements of both services. A similar analysis, but for the downlink, is conducted in [18], [19], where optimal resource allocation strategies and H-NOMA are combined to satisfy the eMBB and URLLC QoS constraints, under the assumption of perfect eMBB CSI and statistical URLLC CSI knowledge.

The information-theoretic framework used by the aforementioned works to characterize the performance achieved by eMBB and URLLC users cannot be applied to massive MIMO scenarios, for different reasons. Establishing the rate (or the spectral efficiency) of the eMBB users in the ergodic (infinite-blocklength) regime, under the block-fading channel model, is sound, as the eMBB codewords span an infinite number of independent fading realizations. Nevertheless, as for the performance of the URLLC users in a quasi-static fading scenario, the use of the outage capacity, whose analysis includes infinite-blocklength assumptions, leads to an inaccurate evaluation of the error probability, as demonstrated in [8]. In addition, outage capacity analyses do not capture the effects of the CSI acquisition overhead when pilots are used to estimate the uplink channel. As an alternative, finite-blocklength analyses have been proposed for URLLC in conventional cellular networks [18], [19], co-located massive MIMO networks [20], [21] and cell-free massive MIMO networks [22], and rely on the information-theoretic bounds and tools developed in [23], e.g., the well-known normal approximation. However, the work in [8] proved that the normal approximation is not accurate in the region of low error probabilities of interest in URLLC (< 10^−4), especially as the number of antennas at the BS increases, and in the presence of imperfect CSI. Importantly, Östman et al. in [8] provided a more rigorous finite-blocklength information-theoretic framework relying on the use of mismatched decoding [24] and of the saddlepoint approximation [25] for evaluating the error probability of the URLLC users in co-located massive MIMO systems. This framework, previously developed for wireless fading channels in [26]–[28], accounts for linear signal processing, imperfect CSI and instantaneous channel estimation error, and additive uncorrelated noise including multi-user interference. However, the analysis of [8] is limited to the URLLC regime, and the coexistence with the eMBB is yet to be investigated under a unified information-theoretic framework.
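For context, a common form of the normal approximation referenced above estimates the error probability of a length-n, rate-R code over a complex AWGN channel as ε ≈ Q((n(C − R) + (log n)/2)/√(nV)), with capacity C = log(1 + SNR) and dispersion V = SNR(SNR + 2)/(SNR + 1)² in nats. A minimal numerical sketch (values purely illustrative; this is the very approximation whose accuracy at URLLC error levels [8] questions):

```python
import math

def q_func(x: float) -> float:
    """Gaussian Q-function, Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def normal_approx_error(snr: float, n: int, rate: float) -> float:
    """Normal approximation of the error probability over a complex AWGN
    channel; capacity, dispersion and rate are in nats per channel use."""
    c = math.log(1.0 + snr)
    v = snr * (snr + 2.0) / (snr + 1.0) ** 2  # channel dispersion
    return q_func((n * (c - rate) + 0.5 * math.log(n)) / math.sqrt(n * v))

snr = 10.0 ** (10.0 / 10.0)  # 10 dB
for frac in (0.5, 0.8, 0.95):  # rate as a fraction of capacity
    print(frac, normal_approx_error(snr, n=100, rate=frac * math.log(1.0 + snr)))
```

As expected, the predicted error probability grows as the rate approaches capacity; [8] shows that at probabilities below 10^−4 this prediction can be far off in massive MIMO settings with imperfect CSI.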
B. CONTRIBUTIONS

Our contributions can be summarized as follows.
• We investigate the non-orthogonal multiplexing of the eMBB and the URLLC in the downlink of a multi-cell massive MIMO system, by providing a unified information-theoretic framework that combines an infinite-blocklength analysis to assess the SE of the eMBB and a finite-blocklength analysis to assess the error probability of the URLLC.
• Unlike prior works, wherein the URLLC performance is inappropriately evaluated by the use of the outage capacity analysis or the error probability obtained via the normal approximation, in this work the finite-blocklength information-theoretic analysis relies on the results and tools established in [8], where mismatched receivers and the saddlepoint approximation are assumed, but the coexistence between URLLC and eMBB was not investigated.
• The proposed unified framework accommodates two alternative coexistence strategies: PUNC and SPC. The former prevents the inter-service interference to protect the URLLC reliability, whereas the latter accepts it to maintain the eMBB service. In addition, the analytical framework accounts for imperfect CSI acquisition at the BSs via uplink pilot transmissions, pilot contamination and pilot overhead, spatially correlated channels, and the lack of CSI at the users.
• We numerically evaluate the performance achieved by PUNC and SPC under different precoding schemes, namely maximum ratio, regularized zero-forcing and multi-cell MMSE, and different power allocation strategies, i.e., equal power allocation, weighted fractional power allocation, and optimal power allocation maximizing the product SINR throughout the network. The coexistence between eMBB and URLLC is explored in various scenarios, including different configurations of the time-division duplex radio frame and different URLLC random activation patterns.
• The results of our comprehensive simulation campaign highlight the clear superiority of SPC over PUNC in most of the considered operating regimes. The main limitation of SPC, namely the multi-user interference it causes, is often overcome by using regularized zero-forcing and multi-cell MMSE, which in turn hinge on a
high-quality CSI acquisition. Whenever these precoding techniques cannot be implemented due to complexity or hardware constraints, the URLLC reliability requirements can be met by fine-tuning the parameters of the proposed weighted fractional power allocation. Conversely, performing PUNC is necessary to preserve the URLLC performance if the interference cancellation via precoding is ineffective, for instance, when the pilot contamination is high or the multi-user interference is excessive.
• Pilot contamination among URLLC users is particularly destructive. This led us to devise a pilot assignment policy that prioritizes the URLLC users. In our approach, we primarily assign unique orthogonal pilots to the URLLC users, admitting pilot reuse only among the eMBB users. If feasible, orthogonal pilots are assigned within cells to prevent the intra-cell pilot contamination and, if the uplink training length is sufficiently large, mutually orthogonal pilots are guaranteed to all users.
C. PAPER OUTLINE

The remainder of this paper is organized as follows. In Section II, we introduce the system model of the multi-cell massive MIMO system, including the description of the uplink training and a unified framework for the data transmission stage accounting for both the puncturing and superposition coding techniques. In Section III, we present the information-theoretic analyses in the infinite-blocklength and the finite-blocklength regimes for the eMBB and the URLLC performance evaluation, respectively. Section IV details the precoding techniques and power allocation strategies to deal with the coexistence of eMBB and URLLC users. Simulation results and discussions are provided in Section V, while the main findings of this work are discussed in Section VI.
D. NOTATION

Vectors and matrices are denoted by boldface lowercase and boldface uppercase letters, respectively. Calligraphic uppercase letters denote sets, while C and R represent the sets of complex and real numbers, respectively. E{·} indicates the expectation operator, while Pr{·} denotes the probability of a set. x⁺ represents the positive part function, namely x⁺ = max{x, 0}, and ⌊·⌋ denotes the floor function. The natural logarithm is indicated by log(·), and Q(·) denotes the Gaussian Q-function. CN(µ, Σ) describes a circularly symmetric complex Gaussian distribution with mean µ and covariance matrix Σ. The superscripts (·)^T, (·)^* and (·)^H denote the transpose, the conjugate and the conjugate transpose (Hermitian) operators, respectively. tr(A) indicates the trace of the matrix A, while ∥a∥ denotes the ℓ2-norm of the vector a. The notation [A]_{:,i} indicates the i-th column of the matrix A. I_N represents the identity matrix of size N × N. Table II introduces the notation used in the system model of this paper.
II. SYSTEM MODEL

Let us consider a multi-cell massive MIMO system with L cells, each one served by a BS that is placed at the cell center and equipped with M co-located antennas. Each cell
TABLE II
SYSTEM MODEL NOTATION

| Symbol | Description |
| --- | --- |
| L | n. of cells |
| K | n. of users per cell |
| M | n. of BS antennas |
| K_u | n. of URLLC users per cell |
| α | K_u/K ∈ (0, 1) |
| K_e | n. of eMBB users per cell |
| τ_c | TDD frame length |
| K^u_j | set of URLLC users in cell j |
| τ_p | UL training length |
| K^e_j | set of eMBB users in cell j |
| τ_d | DL data transmission length |
| T | n. of slots in a TDD frame |
| h^j_lk | channel between BS j and user k in cell l, vector in C^M |
| ĥ^j_lk | estimate of h^j_lk |
| h̃^j_lk | estimation error h^j_lk − ĥ^j_lk |
| R^j_lk | correlation matrix of h^j_lk |
| β^j_lk | average channel gain of h^j_lk |
| C^j_lk | correlation matrix of ĥ^j_lk |
| f | pilot reuse factor |
| p^p_jk | UL pilot power |
| ρ^max_j | max transmit power at BS j |
| P_jk | set of all the users using the same pilot as user k in cell j |
| A^t_jk | 1 if URLLC user k in cell j is active in slot t, 0 otherwise |
| a_u | parameter of the Bernoulli distribution that draws A^t_jk |
| ς^e_jk[n] | data transmitted by BS j to eMBB user k in channel use n |
| ς^u_ji[n] | data transmitted by BS j to URLLC user i in channel use n |
| w_jk | precoding vector, in C^M, used by BS j for its user k |
| σ²_u | UL noise variance |
| ρ^u_ji | DL power to URLLC user i |
| σ²_d | DL noise variance |
| ρ^e_jk | DL power to eMBB user k |
| g^li_jk | precoded DL channel from BS l using w_li to user k in cell j |
| ĝ^li_jk | estimate of g^li_jk |
| n_d | URLLC codeword length |
| ε^dl_jk | DL error probability |
| η^dl | DL network availability |
| ν | exponent characterizing the fractional power allocation (FPA) |
| ω | FPA weight tuning the power allocated to the URLLC users |
covers a square area of D × D km², and provides service to K users. It holds that M ≫ K, so that interference suppression can be efficiently carried out by exploiting the spatial degrees of freedom. A fraction 0 ≤ α ≤ 1 of the K users requests a URLLC service, e.g., a vehicle in cellular vehicle-to-everything (C-V2X) use cases for intelligent transportation systems, or a machine in factory automation use cases for “Industry 4.0”. Letting K_u = αK be the number of URLLC users per cell, then K_e = K − K_u is the number of eMBB users per cell. The sets including the indices of the eMBB and URLLC users in cell j are denoted as K^e_j and K^u_j, respectively.
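For concreteness, the split of a cell's K users into URLLC and eMBB subsets can be sketched as follows (K, α, and the ordering of user indices are illustrative assumptions):

```python
K = 10          # users per cell (illustrative)
alpha = 0.2     # fraction of URLLC users (illustrative)

Ku = int(alpha * K)  # n. of URLLC users per cell, Ku = alpha * K
Ke = K - Ku          # n. of eMBB users per cell

# Index sets mirroring K^u_j and K^e_j, assuming URLLC users come first.
urllc_set = set(range(Ku))
embb_set = set(range(Ku, K))

print(Ku, Ke)  # -> 2 8
```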
A. TDD PROTOCOL AND FRAME STRUCTURE

The considered system operates in time-division duplex (TDD) mode to facilitate the CSI acquisition and limit the estimation overhead. In addition, we assume that the channel is reciprocal as a result of a perfect calibration of the RF chains. By leveraging the channel reciprocity, the channel estimates acquired by the BS in the uplink are then utilized in the downlink to design the transmit precoding vectors. As channel hardening holds for co-located massive MIMO systems with sufficiently large antenna arrays in most of the propagation environments, we assume that the users do not estimate the downlink channels, and reliably decode the downlink data solely relying on the knowledge of the statistical CSI. Hence, the TDD protocol consists of three phases: (i) pilot-based uplink training, (ii) uplink data transmission, and (iii) downlink data transmission.
The time-frequency resources are structured in TDD frames, each one grouping a set of subcarriers and time samples over which the channel response is assumed to be frequency-flat and time-invariant. The TDD frame must accommodate the aforementioned protocol phases and support all the users; thus, its size is designed to match that of the smallest user coherence block in the network. As shown in Fig. 1, the TDD frame consists of τ_c = T_c B_c samples (or channel uses), where T_c is the coherence time and B_c is the coherence bandwidth. τ_p channel uses out of τ_c are spent for the uplink CSI acquisition, whereas the remaining channel uses are devoted to the uplink and downlink data transmission. Since, in this paper, we only focus on the downlink operation, we assume, without loss of generality, that τ_d = τ_c − τ_p is the length of the downlink data transmission phase. The latter is divided into T slots of equal length. As conventionally assumed in the ergodic regime, an eMBB transmission spans multiple (theoretically, an infinite number of) TDD frames, wherein the channel realizations evolve independently according to the block-fading model. To evaluate the spectral efficiency achieved by the eMBB users, we look at a single TDD frame and resort to the information-theoretic bounds and tools of the infinite-blocklength regime [4], [5]. URLLC transmissions, instead, are confined in time to meet the very strict latency requirements and are allowed to span only one slot. Hence, the number of channel uses in a slot equals the URLLC codeword length. We assume a random activation pattern of the URLLC users; within a TDD frame, a URLLC user may be active in multiple slots. To characterize the error probability of the URLLC transmissions, we look separately at each single slot of a TDD frame and resort to the finite-blocklength information-theoretic bounds and tools presented in [8].

Fig. 1. An illustration of the TDD frame, assuming no uplink data transmission phase, and representing the resource allocation in case of puncturing (PUNC) and superposition coding (SPC) operation. [Figure: a frame of coherence time T_c and coherence bandwidth B_c comprises an UL training phase of τ_p channel uses followed by a DL data transmission phase of τ_d channel uses, divided into slots of n_d channel uses each; in each slot the eMBB and URLLC signals are combined either by PUNC or by SPC.]
| 620 |
+
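As a quick numerical illustration of the frame dimensioning above, the sketch below computes $\tau_c$, $\tau_d$, and the per-slot codeword length from assumed (not paper-specific) values of $T_c$, $B_c$, $\tau_p$, and $T$:

```python
# Toy TDD frame dimensioning; all numbers are illustrative assumptions.
T_c = 2e-3                 # coherence time: 2 ms
B_c = 100e3                # coherence bandwidth: 100 kHz
tau_c = int(T_c * B_c)     # channel uses per TDD frame: tau_c = T_c * B_c
tau_p = 20                 # channel uses spent on uplink training
tau_d = tau_c - tau_p      # channel uses left for downlink data
T = 4                      # slots in the downlink phase
n_d = tau_d // T           # channel uses per slot = URLLC codeword length
print(tau_c, tau_d, n_d)   # 200 180 45
```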
B. CHANNEL MODEL AND UPLINK TRAINING

The channel response between the $k$-th user in cell $l$ and the BS in cell $j$ is denoted by the $M$-dimensional complex-valued vector $\mathbf{h}^j_{lk}$. We assume correlated Rayleigh fading channels, that is, $\mathbf{h}^j_{lk} \sim \mathcal{CN}(\mathbf{0}_M, \mathbf{R}^j_{lk})$, where $\mathbf{R}^j_{lk} \in \mathbb{C}^{M \times M}$ is the positive semi-definite spatial correlation matrix. The corresponding average channel gain (or large-scale fading coefficient) is given by $\beta^j_{lk} = \mathrm{tr}(\mathbf{R}^j_{lk})/M$. Large-scale fading quantities are assumed to be known at the BS.

In the uplink training phase, each user transmits a pilot sequence that spans $\tau_p$ channel uses. The pilot sequence of user $k$ in cell $j$ is denoted by $\boldsymbol{\phi}_{jk} \in \mathbb{C}^{\tau_p}$. All the pilot sequences are drawn from a set of $\tau_p$ mutually orthogonal pilots; hence, the inner product between two pilots equals either $\tau_p$, if the sequences are identical, or 0, if they are mutually orthogonal. Notice that re-using the pilots throughout the network might be unavoidable, as the share of the TDD frame reserved for training is limited and, importantly, as the CSI acquisition overhead significantly degrades the spectral efficiency. Pilot reuse gives rise to additional interference, known as pilot contamination [3], which degrades the quality of the acquired CSI and correlates the channel estimates. The cumulative uplink signal received at BS $j$, denoted by $\mathbf{Y}^{\rm p}_j \in \mathbb{C}^{M \times \tau_p}$, reads

$$\mathbf{Y}^{\rm p}_j = \sum_{k=1}^{K} \sqrt{p^{\rm p}_{jk}}\, \mathbf{h}^j_{jk} \boldsymbol{\phi}^T_{jk} + \sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i=1}^{K} \sqrt{p^{\rm p}_{li}}\, \mathbf{h}^j_{li} \boldsymbol{\phi}^T_{li} + \mathbf{N}^{\rm p}_j, \qquad (1)$$

where $p^{\rm p}_{jk}$ is the transmit pilot power, and $\mathbf{N}^{\rm p}_j$ is the additive receiver noise with i.i.d. elements distributed as $\mathcal{CN}(0, \sigma^2_{\rm u})$, with $\sigma^2_{\rm u}$ being the receiver noise variance in the uplink. To estimate the channel of user $k$ in its own cell, $\mathbf{h}^j_{jk}$, BS $j$ correlates $\mathbf{Y}^{\rm p}_j$ with the known pilot sequence $\boldsymbol{\phi}_{jk}$ as

$$\mathbf{y}^{\rm p}_{jjk} = \mathbf{Y}^{\rm p}_j \boldsymbol{\phi}^{*}_{jk} = \sqrt{p^{\rm p}_{jk}}\, \tau_p \mathbf{h}^j_{jk} + \sum_{\substack{i=1 \\ i \neq k}}^{K} \sqrt{p^{\rm p}_{ji}}\, \mathbf{h}^j_{ji} \boldsymbol{\phi}^T_{ji} \boldsymbol{\phi}^{*}_{jk} + \sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i=1}^{K} \sqrt{p^{\rm p}_{li}}\, \mathbf{h}^j_{li} \boldsymbol{\phi}^T_{li} \boldsymbol{\phi}^{*}_{jk} + \mathbf{N}^{\rm p}_j \boldsymbol{\phi}^{*}_{jk}. \qquad (2)$$

In (2), the second term of the right-hand side represents the intra-cell pilot contamination, while the third term quantifies the inter-cell pilot contamination. A conventional pilot allocation strategy consists in assigning mutually orthogonal pilots to users within the same cell, and re-using the pilot sequences over different cells [5]. This is a reasonable choice, as intra-cell pilot contamination is presumably stronger than inter-cell pilot contamination. We let $\tau_p = fK$, where $f$ is referred to as the pilot reuse factor. Importantly, in order not to jeopardize the ultra-reliability of the URLLC transmissions, we assume that unique orthogonal pilot sequences are assigned to all the URLLC users in the network, whenever doable (namely, when $\tau_p > LK_{\rm u}$). In summary, the pilot allocation strategy we propose primarily aims to prevent the URLLC users from being affected by pilot contamination, and secondarily to prevent intra-cell pilot contamination. Finally, if $\tau_p$ is sufficiently large, that is, $\tau_p \geq LK$, then mutually orthogonal pilots can be guaranteed to everyone. Let us define the set

$$\mathcal{P}_{jk} = \left\{ (l,i) : \boldsymbol{\phi}_{li} = \boldsymbol{\phi}_{jk},\ l = 1, \ldots, L,\ i = 1, \ldots, K \right\}, \qquad (3)$$

including the indices of all the users (and of the corresponding cells) that use the same pilot as user $k$ in cell $j$. Hence, we can rewrite (2) as

$$\mathbf{y}^{\rm p}_{jjk} = \sqrt{p^{\rm p}_{jk}}\, \tau_p \mathbf{h}^j_{jk} + \tau_p \sum_{(l,i) \in \mathcal{P}_{jk} \setminus (j,k)} \sqrt{p^{\rm p}_{li}}\, \mathbf{h}^j_{li} + \mathbf{N}^{\rm p}_j \boldsymbol{\phi}^{*}_{jk}. \qquad (4)$$
ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO
|
| 777 |
+
jjk, is a sufficient statistic for
|
| 778 |
+
the estimation of hj
|
| 779 |
+
jk. Upon the knowledge of the spatial
|
| 780 |
+
correlation matrices, BS j can compute the minimum mean-
|
| 781 |
+
squared error (MMSE) estimate of hj
|
| 782 |
+
jk, denoted by �hj
|
| 783 |
+
jk, based
|
| 784 |
+
on the observation yp
|
| 785 |
+
jjk as [5]
|
| 786 |
+
�hj
|
| 787 |
+
jk =
|
| 788 |
+
�
|
| 789 |
+
pp
|
| 790 |
+
jkRj
|
| 791 |
+
jkΨj
|
| 792 |
+
jkyp
|
| 793 |
+
jjk
|
| 794 |
+
(5)
|
| 795 |
+
where
|
| 796 |
+
Ψj
|
| 797 |
+
jk =
|
| 798 |
+
�
|
| 799 |
+
�
|
| 800 |
+
�
|
| 801 |
+
(l,i)∈Pjk
|
| 802 |
+
pp
|
| 803 |
+
liτpRj
|
| 804 |
+
li + σ2
|
| 805 |
+
ulIMj
|
| 806 |
+
�
|
| 807 |
+
�
|
| 808 |
+
−1
|
| 809 |
+
.
|
| 810 |
+
(6)
|
| 811 |
+
The estimation error is given by �hj
|
| 812 |
+
jk = hj
|
| 813 |
+
jk − �hj
|
| 814 |
+
jk, and has
|
| 815 |
+
correlation matrix
|
| 816 |
+
Cj
|
| 817 |
+
jk =E
|
| 818 |
+
�
|
| 819 |
+
�hj
|
| 820 |
+
jk(�hj
|
| 821 |
+
jk)H�
|
| 822 |
+
= Rj
|
| 823 |
+
jk−pp
|
| 824 |
+
jkτpRj
|
| 825 |
+
jkΨj
|
| 826 |
+
jkRj
|
| 827 |
+
jk.
|
| 828 |
+
It follows that �hj
|
| 829 |
+
jk and �hj
|
| 830 |
+
jk are independent random variables
|
| 831 |
+
distributed as
|
| 832 |
+
�hj
|
| 833 |
+
jk ∼ CN
|
| 834 |
+
�
|
| 835 |
+
0M, Cj
|
| 836 |
+
jk
|
| 837 |
+
�
|
| 838 |
+
,
|
| 839 |
+
�hj
|
| 840 |
+
jk ∼ CN
|
| 841 |
+
�
|
| 842 |
+
0M, Rj
|
| 843 |
+
jk−Cj
|
| 844 |
+
jk
|
| 845 |
+
�
|
| 846 |
+
.
|
| 847 |
+
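For concreteness, the MMSE estimation in (5)-(6) can be sketched numerically for a single user with no pilot contamination; the correlation model, dimensions, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, tau_p, p_pilot, sigma2_u = 16, 8, 1.0, 0.1   # assumed toy values

# Exponential spatial correlation model (an assumed choice of R)
r = 0.5
R = np.array([[r ** abs(a - b) for b in range(M)] for a in range(M)], dtype=complex)

# Draw h ~ CN(0, R) through a Cholesky factor of R
L_chol = np.linalg.cholesky(R + 1e-12 * np.eye(M))
h = L_chol @ (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Despread pilot observation: y = sqrt(p)*tau_p*h + noise, noise ~ CN(0, sigma2_u*tau_p*I)
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) * np.sqrt(sigma2_u * tau_p / 2)
y = np.sqrt(p_pilot) * tau_p * h + noise

# Eq. (6) with P_jk = {(j,k)}: Psi = (p*tau_p*R + sigma2_u*I)^{-1}
Psi = np.linalg.inv(p_pilot * tau_p * R + sigma2_u * np.eye(M))
# Eq. (5): h_hat = sqrt(p) * R * Psi * y
h_hat = np.sqrt(p_pilot) * R @ Psi @ y
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))   # small relative estimation error
```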
C. DOWNLINK TRANSMISSION
In the downlink transmission phase, each BS transmits payload data to all the active users of its cell. Let $A^t_{jk}$ be a coefficient that equals 1 if a URLLC transmission takes place in the $t$-th slot for URLLC user $k$ in cell $j$, and 0 otherwise. This coefficient models the random activation pattern of the URLLC users, which follows a Bernoulli distribution with parameter $a_{\rm u}$, i.e., $A^t_{jk} \sim \mathrm{Bern}(a_{\rm u})$. To handle the coexistence of eMBB and URLLC users in the downlink, we consider two transmission techniques: (i) puncturing, and (ii) superposition coding. Under puncturing, whenever a URLLC transmission is triggered by a BS in a certain slot, all the eMBB transmissions therein are dropped. However, the eMBB service can still be guaranteed in the remaining slots of the frame where no URLLC users are active. Under superposition coding, eMBB transmissions occur in all the slots, and each BS linearly combines eMBB and URLLC signals whenever URLLC transmissions are triggered.

The analytical framework detailed next is general, namely, it holds for both the aforementioned transmission techniques upon setting, for an arbitrary BS $j$ and slot $t$, the coefficient

$$\tilde{A}^t_j = \begin{cases} \left[ 1 - \sum_{i \in \mathcal{K}^{\rm u}_j} A^t_{ji} \right]^{+}, & \text{for puncturing,} \\ 1, & \text{for superposition coding.} \end{cases}$$
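The coefficient $\tilde{A}^t_j$ above can be simulated directly; the activation probability and dimensions below are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
K_u, T, a_u = 3, 10, 0.2                 # URLLC users, slots, activation prob. (assumed)

# Random activation pattern A[i, t] ~ Bern(a_u)
A = (rng.random((K_u, T)) < a_u).astype(int)

# Scaling of the eMBB signal in each slot t
A_tilde_punc = np.maximum(1 - A.sum(axis=0), 0)  # puncturing: [1 - sum_i A_ji]^+
A_tilde_spc = np.ones(T, dtype=int)              # superposition coding: always 1
print(A_tilde_punc)   # 0 whenever at least one URLLC user is active in the slot
```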
Let $\varsigma^{\rm e}_{jk}[n]$ or $\varsigma^{\rm u}_{jk}[n]$ be the data symbol transmitted by BS $j$ to user $k$ over an arbitrary channel use $n$, if $k$ is an eMBB user or a URLLC user, respectively. We assume that $\varsigma^{s}_{jk}[n] \sim \mathcal{CN}(0,1)$, with $s \in \{{\rm e}, {\rm u}\}$. A slot consists of $n_d$ channel uses, with $n_d = \lfloor \tau_d / T \rfloor$, which equals the length of the URLLC codeword. The data symbol is precoded by using the $M$-dimensional precoding vector $\mathbf{w}_{jk}$, which is a function of the CSI acquired at the BS during the uplink training. It also holds that $\mathbb{E}\{ \| \mathbf{w}_{jk} \|^2 \} = 1$. The data signal transmitted by BS $j$ over an arbitrary channel use $n$ of slot $t$ is given by

$$\mathbf{x}^t_j[n] = \tilde{A}^t_j \sum_{k \in \mathcal{K}^{\rm e}_j} \sqrt{\rho^{\rm e}_{jk}}\, \mathbf{w}_{jk} \varsigma^{\rm e}_{jk}[n] + \sum_{i \in \mathcal{K}^{\rm u}_j} A^t_{ji} \sqrt{\rho^{\rm u}_{ji}}\, \mathbf{w}_{ji} \varsigma^{\rm u}_{ji}[n], \qquad (7)$$

with $n = 1, \ldots, n_d$, and where $\rho^{\rm e}_{jk}$ and $\rho^{\rm u}_{ji}$ are the downlink transmit powers used by BS $j$ for its eMBB user $k$ and URLLC user $i$, respectively, satisfying the following per-BS power constraint:

$$\mathbb{E}\left\{ \left\| \mathbf{x}^t_j[n] \right\|^2 \right\} = \tilde{A}^t_j \sum_{k \in \mathcal{K}^{\rm e}_j} \rho^{\rm e}_{jk} + \sum_{i \in \mathcal{K}^{\rm u}_j} A^t_{ji} \rho^{\rm u}_{ji} \leq \rho^{\max}_j, \qquad (8)$$
with $j = 1, \ldots, L$, and where $\rho^{\max}_j$ is the maximum transmit power at BS $j$. The data signal received at user $k$ in cell $j$ over an arbitrary channel use $n$ of slot $t$ is denoted as $y^{t,s}_{jk}[n]$, with $s \in \{{\rm e}, {\rm u}\}$. In line with the conventional massive MIMO operation, we assume that the users do not acquire the instantaneous downlink CSI, but rather rely on a mean-value approximation of their downlink precoded channels. Such an approximation is accurate if channel hardening occurs. If user $k$ in cell $j$ is an eMBB user, namely $k \in \mathcal{K}^{\rm e}_j$, then its received data signal over an arbitrary channel use $n$ of slot $t$ can be written as in (9), where $w_{jk}[n] \sim \mathcal{CN}(0, \sigma^2_{\rm d})$ is the i.i.d. receiver noise with variance $\sigma^2_{\rm d}$, and we have defined $g^{li}_{jk} = (\mathbf{h}^l_{jk})^H \mathbf{w}_{li}$, namely, the precoded downlink (scalar) channel between the BS in cell $l$, using the precoding vector intended for its user $i$, and the $k$-th user in cell $j$. If user $k$ in cell $j$ is a URLLC user, its received data signal over an arbitrary channel use $n$ in slot $t$ can be written as in (10). Equation (9) emphasizes the fact that user $k$ in cell $j$ solely knows the statistical CSI of the downlink channel, that is, $\mathbb{E}\{ g^{jk}_{jk} \}$. The second term in (9) represents the self-interference due to this lack of instantaneous CSI, referred to as beamforming gain uncertainty. Moreover, the intra-cell inter-service interference and intra-cell intra-service interference terms represent the interference caused by the URLLC and eMBB users of cell $j$, respectively. This is presumably stronger than the inter-cell interference caused by the eMBB users (i.e., intra-service) and the URLLC users (i.e., inter-service) in the other cells. A similar distinction of the various signal contributions is reported in (10) for URLLC user $k$ in cell $j$. In this case, the lack of instantaneous CSI at the user will be highlighted in the next section.
$$y^{t,{\rm e}}_{jk}[n] = \underbrace{\mathbb{E}\left\{ g^{jk}_{jk} \right\} \tilde{A}^t_j \sqrt{\rho^{\rm e}_{jk}}\, \varsigma^{\rm e}_{jk}[n]}_{\text{desired signal}} + \underbrace{\left( g^{jk}_{jk} - \mathbb{E}\left\{ g^{jk}_{jk} \right\} \right) \tilde{A}^t_j \sqrt{\rho^{\rm e}_{jk}}\, \varsigma^{\rm e}_{jk}[n]}_{\text{self-interference}} + \underbrace{\sum_{i \in \mathcal{K}^{\rm u}_j} g^{ji}_{jk} A^t_{ji} \sqrt{\rho^{\rm u}_{ji}}\, \varsigma^{\rm u}_{ji}[n]}_{\text{intra-cell inter-service interference}} + \underbrace{\sum_{i \in \mathcal{K}^{\rm e}_j \setminus \{k\}} g^{ji}_{jk} \tilde{A}^t_j \sqrt{\rho^{\rm e}_{ji}}\, \varsigma^{\rm e}_{ji}[n]}_{\text{intra-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i \in \mathcal{K}^{\rm e}_l} g^{li}_{jk} \tilde{A}^t_l \sqrt{\rho^{\rm e}_{li}}\, \varsigma^{\rm e}_{li}[n]}_{\text{inter-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i \in \mathcal{K}^{\rm u}_l} g^{li}_{jk} A^t_{li} \sqrt{\rho^{\rm u}_{li}}\, \varsigma^{\rm u}_{li}[n]}_{\text{inter-cell inter-service interference}} + \underbrace{w_{jk}[n]}_{\text{noise}} \qquad (9)$$

$$y^{t,{\rm u}}_{jk}[n] = \underbrace{g^{jk}_{jk} A^t_{jk} \sqrt{\rho^{\rm u}_{jk}}\, \varsigma^{\rm u}_{jk}[n]}_{\text{desired signal}} + \underbrace{\sum_{i \in \mathcal{K}^{\rm u}_j \setminus \{k\}} g^{ji}_{jk} A^t_{ji} \sqrt{\rho^{\rm u}_{ji}}\, \varsigma^{\rm u}_{ji}[n]}_{\text{intra-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i \in \mathcal{K}^{\rm u}_l} g^{li}_{jk} A^t_{li} \sqrt{\rho^{\rm u}_{li}}\, \varsigma^{\rm u}_{li}[n]}_{\text{inter-cell intra-service interference}} + \underbrace{\sum_{l=1}^{L} \sum_{i \in \mathcal{K}^{\rm e}_l} g^{li}_{jk} \tilde{A}^t_l \sqrt{\rho^{\rm e}_{li}}\, \varsigma^{\rm e}_{li}[n]}_{\text{inter-service interference}} + \underbrace{w_{jk}[n]}_{\text{noise}} \qquad (10)$$

III. PERFORMANCE ANALYSIS

In this section, we evaluate the downlink performance of the eMBB and URLLC users. For the eMBB users, we consider the spectral efficiency (SE), obtained by applying the infinite-blocklength information-theoretic results established in the ergodic regime [4], [5], [29]. An achievable downlink SE, namely, a lower bound on the ergodic downlink capacity, can be obtained by applying the popular hardening bound technique [4], [5] to the signal model in (9), treating all the interference sources as uncorrelated noise. Specifically, an achievable downlink spectral efficiency of an arbitrary eMBB user $k$ in cell $j$ is given by
$$\mathrm{SE}^{\rm e}_{jk} = \frac{\tau_d}{\tau_c} \frac{1}{T} \sum_{t=1}^{T} \log_2\left( 1 + \mathrm{SINR}^{t,{\rm e}}_{jk} \right) \ \text{[bits/s/Hz]}, \qquad (11)$$

where $\tau_d/\tau_c$ accounts for the estimation overhead, and

$$\mathrm{SINR}^{t,{\rm e}}_{jk} = \frac{ \tilde{A}^t_j \rho^{\rm e}_{jk} \left| \mathbb{E}\left\{ g^{jk}_{jk} \right\} \right|^2 }{ \sum\limits_{l=1}^{L} \sum\limits_{i=1}^{K} \varrho^t_{li}\, \mathbb{E}\left\{ |g^{li}_{jk}|^2 \right\} - \tilde{A}^t_j \rho^{\rm e}_{jk} \left| \mathbb{E}\left\{ g^{jk}_{jk} \right\} \right|^2 + \sigma^2_{\rm d} } \qquad (12)$$

is the effective SINR of user $k \in \mathcal{K}^{\rm e}_j$, where the expectations are taken with respect to the random channel realizations, and

$$\varrho^t_{li} = \begin{cases} A^t_{li} \rho^{\rm u}_{li}, & \text{if } i \in \mathcal{K}^{\rm u}_l, \\ \tilde{A}^t_l \rho^{\rm e}_{li}, & \text{if } i \in \mathcal{K}^{\rm e}_l. \end{cases} \qquad (13)$$
The expression of the achievable SE in (11) holds for any choice of precoding scheme, any channel estimator, and any channel distribution. Importantly, it accounts for either choice of coexistence technique between heterogeneous services, namely, puncturing or superposition coding. The infinite-blocklength analysis above builds upon the assumption of a block-fading channel model, entailing that each eMBB codeword has infinite length and spans a large number of independent fading realizations. This assumption cannot be applied to the URLLC case. For the URLLC users, we consider a nonasymptotic analysis of the downlink error probability on a per-slot basis, by applying the finite-blocklength information-theoretic results established in [8]. Firstly, we rewrite (10) as

$$y^{t,{\rm u}}_{jk}[n] = g^{jk}_{jk} q_{jk}[n] + z_{jk}[n], \qquad n = 1, \ldots, n_d, \qquad (14)$$

where $q_{jk}[n] = A^t_{jk} \sqrt{\rho^{\rm u}_{jk}}\, \varsigma^{\rm u}_{jk}[n]$, and

$$z_{jk}[n] = \sum_{i \in \mathcal{K}^{\rm u}_j \setminus \{k\}} g^{ji}_{jk} q_{ji}[n] + \sum_{i \in \mathcal{K}^{\rm e}_j} g^{ji}_{jk} \tilde{A}^t_j \sqrt{\rho^{\rm e}_{ji}}\, \varsigma^{\rm e}_{ji}[n] + \sum_{\substack{l=1 \\ l \neq j}}^{L} \left( \sum_{i \in \mathcal{K}^{\rm u}_l} g^{li}_{jk} q_{li}[n] + \sum_{i \in \mathcal{K}^{\rm e}_l} g^{li}_{jk} \tilde{A}^t_l \sqrt{\rho^{\rm e}_{li}}\, \varsigma^{\rm e}_{li}[n] \right) + w_{jk}[n]. \qquad (15)$$
However, URLLC user $k$ in cell $j$ does not have access to $g^{jk}_{jk}$, but performs data decoding by leveraging only its mean value, $\widehat{g}^{jk}_{jk} = \mathbb{E}\{ (\mathbf{h}^j_{jk})^H \mathbf{w}_{jk} \}$, which is treated as perfect. This estimate is accurate if channel hardening holds. Notice that the precoded channel $g^{jk}_{jk}$ is frequency-flat and time-invariant over the transmission of the $n_d$-length URLLC codeword in slot $t$. Moreover, $g^{jk}_{jk}$ remains constant for any other transmission from BS $j$ to user $k$ over slots of the same TDD frame. Given all the channels and precoding vectors, the effective noise terms $\{ z_{jk}[n] \in \mathbb{C};\ n = 1, \ldots, n_d \}$ are conditionally i.i.d. random variables distributed as $\mathcal{CN}(0, \sigma^2_{jk})$, with variance

$$\sigma^2_{jk} = \sum_{i \in \mathcal{K}^{\rm u}_j \setminus \{k\}} A^t_{ji} \rho^{\rm u}_{ji} |g^{ji}_{jk}|^2 + \sum_{i \in \mathcal{K}^{\rm e}_j} \tilde{A}^t_j \rho^{\rm e}_{ji} |g^{ji}_{jk}|^2 + \sum_{\substack{l=1 \\ l \neq j}}^{L} \left( \sum_{i \in \mathcal{K}^{\rm u}_l} A^t_{li} \rho^{\rm u}_{li} |g^{li}_{jk}|^2 + \sum_{i \in \mathcal{K}^{\rm e}_l} \tilde{A}^t_l \rho^{\rm e}_{li} |g^{li}_{jk}|^2 \right) + \sigma^2_{\rm d}. \qquad (16)$$
To determine the transmitted codeword $\mathbf{q}_{jk} = [q_{jk}[1], \ldots, q_{jk}[n_d]]^T$, user $k$ in cell $j$ employs a mismatched scaled nearest-neighbor (SNN) decoder [30], which selects the codeword $\widehat{\mathbf{q}}_{jk}$ from the codebook $\mathcal{C}$ by applying the rule

$$\widehat{\mathbf{q}}_{jk} = \arg\min_{\bar{\mathbf{q}}_{jk} \in \mathcal{C}} \left\| \mathbf{y}^{t,{\rm u}}_{jk} - \widehat{g}^{jk}_{jk} \bar{\mathbf{q}}_{jk} \right\|^2, \qquad (17)$$
|
| 1406 |
+
jk =[yt,u
|
| 1407 |
+
jk [1], . . . , yt,u
|
| 1408 |
+
jk [nd]]T ∈Cnd is the received data
|
| 1409 |
+
vector.
Let $\epsilon^{\rm dl}_{jk} = \Pr\{ \widehat{\mathbf{q}}_{jk} \neq \mathbf{q}_{jk} \}$ be the downlink error probability experienced by URLLC user $k$ in cell $j$ under SNN decoding. An upper bound on $\epsilon^{\rm dl}_{jk}$ is obtained by using the standard random-coding approach [31]:

$$\epsilon^{\rm dl}_{jk} \leq \mathbb{E}_{g^{jk}_{jk}}\left\{ \Pr\left[ \sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n]) \leq \log \frac{m-1}{r} \,\Big|\, g^{jk}_{jk} \right] \right\}, \qquad (18)$$

where $m = 2^b$ is the number of codewords of length $n_d$ that convey $b$ information bits, $r$ is a random variable uniformly distributed in the interval $[0,1]$, and $\imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n])$ is the generalized information density, given by

$$\imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n]) = -s \left| y^{t,{\rm u}}_{jk}[n] - \widehat{g}^{jk}_{jk} q_{jk}[n] \right|^2 + \frac{ s |y^{t,{\rm u}}_{jk}[n]|^2 }{ 1 + s \rho^{\rm u}_{jk} |\widehat{g}^{jk}_{jk}|^2 } + \log\left( 1 + s \rho^{\rm u}_{jk} |\widehat{g}^{jk}_{jk}|^2 \right), \qquad (19)$$

for all $s > 0$. In (18), the expectation is taken over the distribution of $g^{jk}_{jk}$, and the probability is computed with respect to the downlink data symbols $\{ q_{jk}[n] \}_{n=1}^{n_d}$, the effective additive noise $\{ z_{jk}[n] \}_{n=1}^{n_d}$, and the random variable $r$. The evaluation of the upper bound in (18) entails a very demanding numerical computation: firstly, to obtain the probability, and then to tighten the upper bound to the low error probability target of the URLLC use case by optimizing with respect to $s$.
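The generalized information density in (19) is straightforward to evaluate sample-by-sample; the snippet below estimates its empirical mean for assumed scalar channel values (all numbers are illustrative, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(4)
s, rho, n_d, sigma2 = 1.0, 1.0, 50, 0.2      # assumed toy values
g, g_hat = 1.0 + 0.1j, 1.0 + 0.0j            # true and mean (hardened) precoded channel

# Transmit symbols q[n] ~ CN(0, rho) and effective noise z[n] ~ CN(0, sigma2)
q = np.sqrt(rho / 2) * (rng.standard_normal(n_d) + 1j * rng.standard_normal(n_d))
z = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_d) + 1j * rng.standard_normal(n_d))
y = g * q + z

# Eq. (19), evaluated per channel use (in nats)
a = 1.0 + s * rho * abs(g_hat) ** 2
info = -s * np.abs(y - g_hat * q) ** 2 + s * np.abs(y) ** 2 / a + np.log(a)
print(info.mean())   # empirical average; its expectation is the generalized mutual information
```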
Luckily, we can reliably approximate the right-hand side of (18) in closed form, with a significant relief of the computational burden, by using the saddlepoint approximation provided in [8, Th. 2]. The existence of the saddlepoint approximation is guaranteed by the fact that the third derivative of the moment-generating function of $-\imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n])$ exists in a neighborhood of zero, delimited by the values $\underline{\varepsilon} < 0 < \overline{\varepsilon}$ given by [8, Appendix B]

$$\underline{\varepsilon} = -\frac{ \sqrt{(\zeta_b - \zeta_a)^2 + 4 \zeta_a \zeta_b (1 - \mu)} + \zeta_a - \zeta_b }{ 2 \zeta_a \zeta_b (1 - \mu) }, \qquad (20)$$

$$\overline{\varepsilon} = \frac{ \sqrt{(\zeta_b - \zeta_a)^2 + 4 \zeta_a \zeta_b (1 - \mu)} - \zeta_a + \zeta_b }{ 2 \zeta_a \zeta_b (1 - \mu) }, \qquad (21)$$

where

$$\zeta_a = s \left( \rho^{\rm u}_{jk} |g^{jk}_{jk} - \widehat{g}^{jk}_{jk}|^2 + \sigma^2 \right), \qquad (22)$$

$$\zeta_b = \frac{s}{1 + s \rho^{\rm u}_{jk} |\widehat{g}^{jk}_{jk}|^2} \left( \rho^{\rm u}_{jk} |g^{jk}_{jk}|^2 + \sigma^2 \right), \qquad (23)$$

$$\mu = \frac{ s^2 \left| \rho^{\rm u}_{jk} |g^{jk}_{jk}|^2 + \sigma^2 - (g^{jk}_{jk})^{*} \widehat{g}^{jk}_{jk} \rho^{\rm u}_{jk} \right|^2 }{ \zeta_a \zeta_b \left( 1 + s \rho^{\rm u}_{jk} |\widehat{g}^{jk}_{jk}|^2 \right) }. \qquad (24)$$
The saddlepoint approximation hinges on the cumulant-generating function of $-\imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n])$, given by

$$\upsilon(\varepsilon) = \log \mathbb{E}\left\{ e^{-\varepsilon\, \imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n])} \right\}, \qquad (25)$$

and on its first derivative $\upsilon'(\varepsilon)$ and second derivative $\upsilon''(\varepsilon)$; for all $\varepsilon \in (\underline{\varepsilon}, \overline{\varepsilon})$,

$$\upsilon(\varepsilon) = -\varepsilon \log\left( 1 + s \rho^{\rm u}_{jk} |\widehat{g}^{jk}_{jk}|^2 \right) - \log\left( 1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2 \right), \qquad (26)$$

$$\upsilon'(\varepsilon) = -\log\left( 1 + s \rho^{\rm u}_{jk} |\widehat{g}^{jk}_{jk}|^2 \right) - \frac{ (\zeta_b - \zeta_a) - 2 \zeta_a \zeta_b (1 - \mu)\varepsilon }{ 1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2 }, \qquad (27)$$

$$\upsilon''(\varepsilon) = \left( \frac{ (\zeta_b - \zeta_a) - 2 \zeta_a \zeta_b (1 - \mu)\varepsilon }{ 1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2 } \right)^2 + \frac{ 2 \zeta_a \zeta_b (1 - \mu) }{ 1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2 }. \qquad (28)$$
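The closed forms (26)-(28) can be sanity-checked numerically against finite differences; $\zeta_a$, $\zeta_b$, $\mu$, and the channel values below are assumed toy inputs, not taken from the paper.

```python
import numpy as np

s, rho, g_hat = 1.0, 1.0, 0.9               # assumed toy values
zeta_a, zeta_b, mu = 0.3, 0.8, 0.1          # assumed in place of (22)-(24)
c = np.log(1.0 + s * rho * abs(g_hat) ** 2)

def D(e):   # common denominator in (26)-(28)
    return 1.0 + (zeta_b - zeta_a) * e - zeta_a * zeta_b * (1.0 - mu) * e ** 2

def upsilon(e):       # Eq. (26)
    return -e * c - np.log(D(e))

def upsilon_p(e):     # Eq. (27)
    return -c - ((zeta_b - zeta_a) - 2.0 * zeta_a * zeta_b * (1.0 - mu) * e) / D(e)

def upsilon_pp(e):    # Eq. (28)
    num = (zeta_b - zeta_a) - 2.0 * zeta_a * zeta_b * (1.0 - mu) * e
    return (num / D(e)) ** 2 + 2.0 * zeta_a * zeta_b * (1.0 - mu) / D(e)

print(-upsilon_p(0.0))   # generalized mutual information I_s = -v'(0)
```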
Let $m = e^{n_d R}$ for some strictly positive transmission rate $R = (\log m)/n_d$, and let $\varepsilon \in (\underline{\varepsilon}, \overline{\varepsilon})$ be the solution to the equation $R = -\upsilon'(\varepsilon)$. Let $I_s$ be the generalized mutual information [30], defined as $I_s = \mathbb{E}\{ \imath_s(q_{jk}[1], y^{t,{\rm u}}_{jk}[1]) \} = -\upsilon'(0)$. Lastly, consider the critical rate [31, Eq. (5.6.30)], given by $R^{\rm cr}_s = -\upsilon'(1)$. Then, we have three possible saddlepoint approximations for the error probability upper bound [8].

If $\varepsilon \in [0, 1]$, then $R^{\rm cr}_s \leq R \leq I_s$ and

$$\Pr\left\{ \sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n]) \leq \log \frac{e^{n_d R} - 1}{r} \right\} \approx e^{n_d [\upsilon(\varepsilon) + \varepsilon R]} \left[ \Psi_{n_d,\varepsilon}(\varepsilon) + \Psi_{n_d,\varepsilon}(1 - \varepsilon) \right], \qquad (29)$$

where

$$\Psi_{n_d,\varepsilon}(\ell) \triangleq e^{\frac{1}{2} n_d \ell^2 \upsilon''(\varepsilon)}\, Q\left( \ell \sqrt{n_d \upsilon''(\varepsilon)} \right). \qquad (30)$$

If $\varepsilon > 1$, then $R < R^{\rm cr}_s$ and

$$\Pr\left\{ \sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n]) \leq \log \frac{e^{n_d R} - 1}{r} \right\} \approx e^{n_d [\upsilon(1) + R]} \left[ \widetilde{\Psi}_{n_d}(1, 1) + \widetilde{\Psi}_{n_d}(0, -1) \right], \qquad (31)$$

where

$$\widetilde{\Psi}_{n_d}(\ell_1, \ell_2) \triangleq e^{n_d \ell_1 \left[ R^{\rm cr}_s - R + \frac{1}{2} \upsilon''(1) \right]}\, Q\left( \ell_1 \sqrt{n_d \upsilon''(1)} + \ell_2 \frac{ n_d (R^{\rm cr}_s - R) }{ \sqrt{n_d \upsilon''(1)} } \right). \qquad (32)$$

If $\varepsilon < 0$, then $R > I_s$ and

$$\Pr\left\{ \sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,{\rm u}}_{jk}[n]) \leq \log \frac{e^{n_d R} - 1}{r} \right\} \approx 1 - e^{n_d [\upsilon(\varepsilon) + \varepsilon R]} \left[ \Psi_{n_d,\varepsilon}(-\varepsilon) - \Psi_{n_d,\varepsilon}(1 - \varepsilon) \right]. \qquad (33)$$
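For the central case (29)-(30), a scalar sketch with assumed CGF values reads as follows; the saddlepoint, rate, and derivative values are illustrative placeholders.

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Psi(n_d, ell, ups_pp):
    # Eq. (30): Psi_{n_d,eps}(ell) = exp(0.5*n_d*ell^2*v''(eps)) * Q(ell*sqrt(n_d*v''(eps)))
    return math.exp(0.5 * n_d * ell ** 2 * ups_pp) * Q(ell * math.sqrt(n_d * ups_pp))

# Assumed toy values of the saddlepoint eps, rate R, and CGF derivatives
n_d, eps, R = 100, 0.5, 0.5
ups, ups_pp = -0.6, 0.02     # v(eps) and v''(eps), assumed

# Eq. (29): approximate probability of the random-coding event
approx = math.exp(n_d * (ups + eps * R)) * (Psi(n_d, eps, ups_pp) + Psi(n_d, 1.0 - eps, ups_pp))
print(approx)
```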
The saddlepoint approximation is more accurate in the URLLC massive MIMO regime than the conventionally used normal approximation [23], as the former characterizes the exponential decay of the error probability, i.e., the error exponent, as a function of the URLLC codeword length and of the transmission rate requirement $R$, while it uses the Berry-Esseen central-limit theorem (used in the normal approximation) only to characterize the multiplicative factor following the error-exponent term. The normal approximation, whose formulation directly involves the generalized mutual information $I_s$ but not $R$, is accurate only when $I_s$ is close to $R$. This operating regime does not hold for URLLC, wherein $R$ is typically lower than $I_s$ to accomplish the very low error probability targets. Once the approximate upper bounds on the downlink error probability are obtained via the saddlepoint approximation, we compute the downlink network availability [8], $\eta^{\rm dl}$, as

$$\eta^{\rm dl} = \Pr\left\{ \epsilon^{\rm dl}_{jk} \leq \epsilon^{\rm dl}_{\rm target} \right\}, \qquad (34)$$

which measures the probability that the target error probability $\epsilon^{\rm dl}_{\rm target}$ is satisfied by an arbitrary user $k$ in cell $j$, in the presence of interfering users. While the expectation in the error probability definition is taken with respect to the small-scale fading and the effective additive noise, given a large-scale fading realization, the probability in the network availability definition is computed with respect to the large-scale fading (i.e., path loss, shadowing, etc.). The expression of the network availability in (34) holds for any choice of precoding scheme, any channel estimator, and any channel distribution. Importantly, it accounts for either choice of coexistence technique between heterogeneous services, namely, puncturing or superposition coding.
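The network availability in (34) is naturally estimated by Monte Carlo over large-scale fading draws. In the sketch below, the mapping from a large-scale fading realization to a per-user error probability is a crude placeholder (an assumption for illustration), standing in for the saddlepoint computation described above:

```python
import numpy as np

rng = np.random.default_rng(2)
eps_target = 1e-5
n_draws = 10_000

# Large-scale fading draws: path loss + log-normal shadowing, in dB (assumed model)
beta_dB = rng.normal(-100.0, 8.0, n_draws)

# Placeholder mapping beta -> error probability (monotone in the channel gain)
eps_dl = np.minimum(1.0, 10.0 ** (-(beta_dB + 130.0) / 10.0))

# Eq. (34): fraction of large-scale realizations meeting the target
eta_dl = np.mean(eps_dl <= eps_target)
print(eta_dl)
```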
IV. PRECODING AND POWER CONTROL
The choice of the precoding scheme and of the downlink power allocation deeply affects the SE of the eMBB users and the network availability of the URLLC users. For the sake of comparison, we herein consider three precoding schemes and three power allocation strategies. The general expression of the precoding vector intended for user $k$ in cell $j$ is given by

$$\mathbf{w}_{jk} = \frac{\mathbf{v}_{jk}}{\| \mathbf{v}_{jk} \|}, \qquad (35)$$

where the denominator serves to make the average power of the precoding vector unitary, and $\mathbf{v}_{jk}$ is characterized next.
|
| 1677 |
+
Multi-cell MMSE (M-MMSE):

vM-MMSE_jk = [ ( Σ_{l=1}^{L} Ĥj_l Pl (Ĥj_l)^H + Υj + σ2_u IM )^{-1} Ĥj_j Pj ]_{:,k}

where Pl = diag(pl1, . . . , plK) ∈ R^{K×K} is the matrix with the uplink transmit powers of all the users in cell l as diagonal elements, Υj = Σ_{l=1}^{L} Σ_{i=1}^{K} pli Cj_li, and Ĥj_l = [ĥj_l1 . . . ĥj_lK]. M-MMSE precoding provides a nearly optimal downlink SE but requires each BS to acquire the CSI and statistical CSI of all the users of the multi-cell system. Moreover, the computation of the precoding vector, which entails inverting an M×M matrix, may be demanding for large BS arrays. Although impractical, M-MMSE precoding will serve as a benchmark.
Regularized zero-forcing (RZF):

vRZF_jk = [ Ĥj_j ( (Ĥj_j)^H Ĥj_j + σ2_u Pj^{-1} )^{-1} ]_{:,k} .

Compared to M-MMSE, RZF precoding requires each BS to estimate the channels of only its own users. Moreover, computing the RZF precoding vector is computationally cheaper since the matrix to be inverted is K×K. However, RZF only suppresses the intra-cell interference and, unlike M-MMSE, provides the users no protection mechanism against inter-cell interference and channel estimation errors.
Maximum Ratio (MR): vMR_jk = ĥj_jk. It is computationally the cheapest but performance-wise the worst precoding scheme: MR only aims at maximizing the power of the desired signal, providing no interference-suppression mechanism. MR will serve as a lower bound on the performance.
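The three precoders differ mainly in which channel estimates they consume and in the size of the matrix inverse. A minimal NumPy sketch, with toy dimensions, identity uplink-power matrices, and the estimation-error covariances Υj omitted (all assumptions for illustration, not the paper's simulator):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, L = 32, 4, 2           # antennas, users per cell, cells (toy sizes)
sigma2 = 1.0                 # effective uplink noise power
P = [np.eye(K) for _ in range(L)]   # uplink transmit powers (identity, toy)
Hhat = [rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
        for _ in range(L)]          # channel estimates available at BS j

def normalize(V):
    # Column-wise normalization, making each precoder unit power as in (35)
    return V / np.linalg.norm(V, axis=0)

# M-MMSE: invert an M x M matrix built from ALL cells' estimates
# (statistical-CSI term Upsilon_j is dropped in this sketch).
A = sum(H @ Pl @ H.conj().T for H, Pl in zip(Hhat, P)) + sigma2 * np.eye(M)
W_mmse = normalize(np.linalg.solve(A, Hhat[0] @ P[0]))

# RZF: invert a K x K matrix built from the serving cell's estimates only.
H0 = Hhat[0]
B = H0.conj().T @ H0 + sigma2 * np.linalg.inv(P[0])
W_rzf = normalize(H0 @ np.linalg.inv(B))

# MR: just the normalized channel estimate.
W_mr = normalize(H0)

print(W_mmse.shape, W_rzf.shape, W_mr.shape)  # each (M, K)
```

The M×M versus K×K inverse is visible directly in the shapes of A and B, which is the complexity argument made above.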
Properly allocating the downlink power can make all the difference in meeting the strict reliability requirements of the URLLC users and in improving the SE of the eMBB users. Next, we provide three power allocation schemes that take into account the power budget at the BSs, the adopted eMBB-URLLC coexistence strategy and the URLLC activation pattern, which is known at the BS in the downlink operation.
Equal power allocation (EPA): It consists in setting

ρu_ji = ρmax_j · At_ji / ( Ãt_j Ke + Σ_{k∈Ku_j} At_jk ),   i ∈ Ku_j,   (36)

ρe_jk = ρmax_j · Ãt_j / ( Ãt_j Ke + Σ_{i∈Ku_j} At_ji ),   k ∈ Ke_j,   (37)

to satisfy the per-BS power constraint in (8) with equality and allocate the same share of power to each user, regardless of its channel conditions and its service requirements.
Weighted fractional power allocation (FPA): It consists in setting the powers as

ρu_ji = ω ρmax_j At_ji (βj_ji)^ν / ( (1−ω) Ãt_j Σ_{k∈Ke_j} (βj_jk)^ν + ω Σ_{u∈Ku_j} At_ju (βj_ju)^ν ),   i ∈ Ku_j,   (38)

ρe_jk = (1−ω) ρmax_j Ãt_j (βj_jk)^ν / ( (1−ω) Ãt_j Σ_{e∈Ke_j} (βj_je)^ν + ω Σ_{i∈Ku_j} At_ji (βj_ji)^ν ),   k ∈ Ke_j,   (39)

where the weight ω ∈ (0, 1) adjusts the amount of downlink power allocated to the URLLC users, while ν establishes the power control policy as a function of the average channel gain. An opportunistic power allocation is attained by setting ν > 0, whereby more power is allocated to the users with better channel conditions. Conversely, fairness is supported by setting ν < 0, whereby more power is allocated to the users with worse channel conditions. If ω ∈ (0.5, 1), a larger share of power is allocated to the URLLC users than to the eMBB users, whereas the opposite holds if ω ∈ (0, 0.5). Notice that, if ν = 0 and ω = 0.5, the FPA reduces to the EPA.
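A single-cell sketch of (38)-(39) makes the EPA special case explicit. The function signature and the toy gains are illustrative assumptions; only the formulas follow the text.

```python
import numpy as np

def fpa(rho_max, beta_e, beta_u, A_u, A_tilde, omega=0.5, nu=0.0):
    """Weighted fractional power allocation (38)-(39) for one cell j.

    beta_e / beta_u : large-scale gains of the eMBB / URLLC users
    A_u             : URLLC activity indicators A^t_ji (0/1)
    A_tilde         : eMBB activity indicator of the slot, Ã^t_j
    """
    beta_e, beta_u, A_u = map(np.asarray, (beta_e, beta_u, A_u))
    den = ((1 - omega) * A_tilde * np.sum(beta_e ** nu)
           + omega * np.sum(A_u * beta_u ** nu))
    rho_u = omega * rho_max * A_u * beta_u ** nu / den            # (38)
    rho_e = (1 - omega) * rho_max * A_tilde * beta_e ** nu / den  # (39)
    return rho_e, rho_u

rho_max = 10.0
beta_e, beta_u = [1e-7, 4e-7, 9e-7], [2e-6, 3e-6]   # toy channel gains
rho_e, rho_u = fpa(rho_max, beta_e, beta_u, A_u=[1, 1], A_tilde=1,
                   omega=0.5, nu=0.0)
# With nu = 0 and omega = 0.5, FPA collapses to EPA: equal shares,
# and the per-BS budget is met with equality.
print(rho_e, rho_u, rho_e.sum() + rho_u.sum())
```

Setting ν = −0.5 instead shifts power toward the weaker (here, eMBB) users, which is the fairness effect discussed above.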
ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO

Optimal power allocation (OPA) for max product SINR: The powers are the solution of the optimization problem

maximize_{ρs_jk}  ∏_{j=1}^{L} ∏_{k=1}^{K} SINRt,s_jk   (40a)
subject to  Σ_{k=1}^{K} ϱt_jk ≤ ρmax_j,  ∀j,   (40b)

where the superscript s = e if user k ∈ Ke_j, and s = u otherwise, and ϱt_jk is given in (13). Without further entangling the notation in (40), we remark that the SINR of inactive users is fictitiously set to 1 to preserve the optimization problem formulation. This power allocation strategy treats all the users as eMBB users; hence it would be optimal if no URLLC users were active in a given slot, as it maximizes a lower bound on the sum SE of the multi-cell system. Although the SINR expression in (12) is meaningless when applied to a URLLC user, we can still heuristically plug the URLLC powers resulting from (40) into the error probability analysis and motivate this approach by looking at the performance. All the considered power allocation schemes, in principle, run on a slot basis in order to adapt the power coefficients to the URLLC activation pattern. Fortunately, these schemes only rely on knowledge of the statistical CSI, which makes it possible to pre-compute some power coefficients or to keep the same power allocation over multiple slots/frames in case of no macroscopic changes in the propagation environment. Unlike the EPA and FPA schemes, the OPA scheme requires a certain degree of cooperation among the BSs, which must send statistical CSI to let a central processing unit (e.g., a master BS) compute the SINRs of all the users and solve the optimization problem, and then feed back the power coefficients to use. This would introduce an intolerable delay for the URLLC users. Moreover, solving problem (40), although efficiently as a geometric program [5, Th. 7.2], is unlikely to be doable within a time-slot, especially for crowded networks. Hence, the OPA scheme is of limited practical use, but will serve for benchmarking purposes.
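To see the structure of (40) concretely, the following toy surrogate maximizes the log of the product SINR under per-BS budgets by plain random search. The SINR model (gains g and coupling c) is a made-up stand-in for (12), and random search is not the geometric-program solver of [5, Th. 7.2]; this only illustrates the objective and constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
L_cells, K = 2, 3
# Hypothetical deterministic SINR model: per-user signal gain g[j, k]
# and a fixed interference-coupling level c (illustrative assumptions).
g = rng.uniform(1.0, 5.0, size=(L_cells, K))
c = 0.05
rho_max = 10.0

def log_prod_sinr(rho):
    """log of the product-SINR objective (40a) under the toy model:
    interference seen by a user = c * (all other allocated power)."""
    total = rho.sum()
    sinr = g * rho / (1.0 + c * (total - rho))
    return np.sum(np.log(sinr))

best, best_val = None, -np.inf
for _ in range(20000):
    rho = rng.random((L_cells, K))
    rho *= rho_max / rho.sum(axis=1, keepdims=True)  # per-BS budget (40b)
    val = log_prod_sinr(rho)
    if val > best_val:
        best, best_val = rho, val
print("best log-product SINR found:", round(best_val, 3))
```

Because log turns the product into a sum, maximizing Σ log SINR is equivalent to (40a), which is also why the problem admits a geometric-program reformulation.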
V. SIMULATION RESULTS

In this section, we present and discuss the results of our simulations, in which the coexistence of eMBB and URLLC is analyzed in depth under different setups. Specifically, we shed light on the impact on the performance of different factors, such as the transmission technique and the precoding scheme, the power control strategy, imperfect CSI and estimation overhead, pilot contamination, the length and number of slots in a TDD frame, and the characteristics of the URLLC activation pattern.
Our simulation scenario consists of a multi-cell massive MIMO system with L = 4 cells. Each cell covers a nominal area of 500 × 500 square meters and is served by a BS, placed at the cell center, equipped with a uniform linear array (ULA) of M = 100 equispaced half-wavelength antenna elements. A wrap-around topology is implemented as in [5, Sec. 4.1.3]. The users are dropped uniformly at random over the coverage area, but at a minimum distance of 25 m from the BS. In addition, we assume that the URLLC users are distributed uniformly at random in an area of 125 × 125 square meters surrounding the BS. A random realization of the user locations determines a set of large-scale fading coefficients and constitutes a snapshot of the network. For a given network snapshot, the achievable downlink SEs of the active eMBB users are computed according to (11), while the downlink error probabilities of the URLLC users are obtained according to the approximations (29)-(33). The cumulative distribution function (CDF) of the SE and the network availability are then drawn over many network snapshots. The channel correlation matrices are generated according to the popular local scattering spatial correlation model [5, Sec. 2.6], and we assume that the scattering is only localized around the users and uniformly distributed at random with an angular spread of 25◦ [8]. The average channel gain is obtained according to the non-line-of-sight macro-cell 3GPP model for 2 GHz carriers [32], and is given in dB by

βk = −35.3 − 37.6 log10( dk / 1 m ) + Fk

for an arbitrary user k placed at a distance dk from its BS, where Fk ∼ N(0, σ2_sh) models the log-normal shadowing as an i.i.d. random variable with standard deviation σsh = 4 dB. The transmission bandwidth is 20 MHz, and the receiver noise power equals −94 dBm for both the uplink and the downlink. Moreover, we let ρmax_j = 46 dBm, j = 1, . . . , L, and set the uplink transmit power, for both pilot and payload data, to 23 dBm for all the users. We assume that the URLLC packet consists of b = 160 bits, yielding a transmission rate R = b/nd, which is suitable for factory automation use cases, such as motion control, and in line with the low latency requirements [33, Annex A]. Lastly, without loss of generality, we set τu = 0 as we only focus on the downlink performance. Unless otherwise stated, we consider TDD frames of length τc = 580 channel uses, given by Tc = 2 ms and Bc = 290 kHz, which supports user mobility up to 67.50 km/h.
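The large-scale fading model above is easy to reproduce; the sketch below implements the path-loss formula with the stated shadowing, with distances drawn over an assumed 25-354 m range (the exact drop geometry is the simulator's, not shown here).

```python
import numpy as np

rng = np.random.default_rng(3)

def channel_gain_db(d_m, sigma_sh_db=4.0):
    """Average channel gain [dB] of the NLoS macro-cell 3GPP model used in
    the simulations: beta_k = -35.3 - 37.6 log10(d_k / 1 m) + F_k, with
    i.i.d. log-normal shadowing F_k ~ N(0, sigma_sh^2) in dB."""
    d_m = np.asarray(d_m, dtype=float)
    return -35.3 - 37.6 * np.log10(d_m) + rng.normal(0.0, sigma_sh_db,
                                                     d_m.shape)

# Users dropped between the 25 m minimum distance and roughly the cell edge.
d = rng.uniform(25.0, 354.0, size=5)
beta_db = channel_gain_db(d)
beta_lin = 10 ** (beta_db / 10)   # linear gains, as used in the SINRs
print(np.round(beta_db, 1))
```

These linear gains are exactly the βj_jk coefficients entering the FPA rules (38)-(39).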
In the first set of simulations we consider the following setup: K = 20, α = 0.2, au = 10−0.5, τp = 80 (no pilot contamination), and T = 5 slots of length nd = 100 channel uses. In Fig. 2 we plot the CDFs of the achievable downlink SE per "active" eMBB user obtained for different precoding and power allocation strategies, both for superposition coding (top subfigure) and for the puncturing technique (bottom subfigure). Under these assumptions, SPC is greatly superior to PUNC, precoding and power allocation strategies being equal. M-MMSE with OPA gives, as expected, the best SE, but EPA performs almost equally well, regardless of the precoding scheme. RZF provides an excellent practical trade-off between M-MMSE and MR. These results suggest that we are approximately operating in an interference-free scenario, thanks to the full and partial interference-suppression mechanisms provided by M-MMSE and RZF, respectively. As for the FPA strategy, in these simulations we have selected ν = 0.5 to promote an opportunistic power allocation and ω = 0.6 to prioritize the URLLC users. Such a choice does not favor the eMBB users and justifies the worst performance of FPA among the considered strategies when SPC is applied.
Fig. 2. CDFs of the achievable downlink SE per active eMBB user, for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.

Fig. 3. CDFs of the achievable downlink sum SE per cell, for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.
Same conclusions hold for the results shown in Fig. 3, where the CDFs of the corresponding sum SE per cell are illustrated. In these figures, we mainly emphasize the eMBB service outage likely to occur when PUNC is adopted. We define the eMBB service outage, under PUNC operation, as

ς out = Pr{ Σ_{k∈Ke_j} SEe_jk = 0 },   j = 1, . . . , L,

where the probability is computed with respect to the large-scale fading. This probability that a BS provides no service in a TDD frame to its eMBB users depends on the activation pattern of the URLLC users and on the number of slots per frame; we will discuss this aspect in detail later. Under the settings considered in Fig. 3, the eMBB service outage is quite significant, as it amounts to about 30%.

Fig. 4. Network availability for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.

Fig. 5. Downlink per-user error probability for different transmission and precoding strategies. Settings: EPA, K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.
In Fig. 4 we move to the URLLC performance by showing the downlink network availability achieved when ϵdl_target = 10−5. Despite the interference caused by the eMBB users when SPC is performed, both M-MMSE and RZF are able to provide levels of network availability close to one, in line with PUNC, revealing a great ability to suppress the interference and support high reliability. Conversely, MR provides poor performance under SPC when the EPA or OPA (which is optimal for the eMBB users) schemes are used. Notice that our choice of the FPA parameters pays off for the SPC/MR combination. The network availability values shown in Fig. 4 are obtained from the error probabilities whose CDFs are illustrated in Fig. 5. To clarify its meaning, the network availability is given by the crossing point between the CDF of the per-user error probability and the vertical line representing the target error probability, as Fig. 5 highlights (blue circle markers). From this set of simulations, we conclude that SPC is clearly superior to PUNC in terms of SE while still providing very high network availability, when M-MMSE or RZF is employed. If MR is the only viable option (for instance, due to strict complexity or hardware constraints), then SPC with FPA, upon properly setting the design parameters ν and ω, is an effective choice to keep the network availability high while preventing any eMBB service outage.

Fig. 6. Average per-user SE achieved by SPC with FPA, for different precoding schemes and values of ν, ω. The average is taken over 200 network snapshots. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.
In this regard, we now focus on how to select ν and ω appropriately. Using the same settings as in the first set of simulations, in Fig. 6 we plot the average per-user SE assuming SPC and different precoding schemes with FPA as ν and ω vary. From the eMBB user perspective, it is preferable to set a small value for ω, and ν in the interval [−0.5, 0]. While the former is trivial, the latter needs further discussion. Indeed, recall that positive values of ν allocate more power to users with better channel conditions. Since we assume the URLLC users are uniformly distributed in a smaller area surrounding the BSs, it is very likely that they are closer to the BS than most of the eMBB users. Therefore, negative values of ν increase the fairness and improve the eMBB users' performance. Large values for both ω and ν excessively unbalance the power distribution in favor of the URLLC users, degrading the SE of the eMBB users.
Conversely, small values for both ω and ν break down the network availability of the URLLC users in SPC operation, as clearly seen in Fig. 7. Nevertheless, both M-MMSE and RZF are able to provide levels of network availability close to 1 except when ν = −1, while MR is quite sensitive to this parameter tuning. Suppressing the multi-user interference is of vital importance when SPC is adopted, and RZF, although not dealing with the inter-cell interference, is an excellent trade-off between performance and practicality. Fine-tuning the parameters of the FPA scheme yields satisfying performance when using MR: FPA becomes a valid, heuristic alternative to combat the multi-user interference whenever the latter cannot be removed by the precoding technique.

Setting ω becomes pointless when using PUNC with FPA, as only URLLC transmissions take place in the considered slot. Hence, in Fig. 8 and Fig. 9 we focus on the average SE per user and the network availability as only ν varies. In both cases we notice that an equal power allocation, i.e., ν = 0, is desirable. As for the SE of the eMBB users, negative values of ν support the lower SEs (e.g., the 95%-likely SE per user), hence the fairness among the users, while large positive values of ν support the peak SE in a greedy fashion, neglecting the lower SEs. Therefore, ν = 0 is sound if the average SE is targeted, especially when the multi-user interference is partially or fully canceled. As for the network availability of the URLLC users, any choice of ν ∈ [−1, 1] is solid as long as M-MMSE or RZF is employed, while the performance of MR is relatively penalized whenever a non-neutral choice of ν is taken. Presumably, the number of URLLC users simultaneously active in the same slot (resulting from the chosen values of α and au) is such that the multi-user interference is not significant.

Fig. 7. Network availability achieved by SPC with FPA, for different precoding schemes and values of ν, ω. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.

Fig. 8. Average per-user SE (with 95% confidence interval) achieved by PUNC with FPA, for different precoding schemes and values of ν. The average is taken over 200 network snapshots. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.
Next, we evaluate the performance as a function of the number of slots in a TDD frame, T, and of the size of the slot, nd, which in turn determines the URLLC codeword length. In this set of simulations and hereafter, we omit the results achieved by MR and only consider FPA with ν = 0 and ω = α, motivated by the previous results.

Fig. 9. Network availability achieved by PUNC with FPA, for different precoding schemes and values of ν. Settings: K = 20, α = 0.2, au = 10−0.5, τp = 80, T = 5, nd = 100.

Fig. 10. CDFs of the achievable downlink sum SE per cell, for different transmission and precoding strategies, as the number of slots per frame varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10−0.5, τp = 80.

Fig. 10 shows the CDFs of the sum SE per cell for three different setups:
(i) nd = 25, T = 20; (ii) nd = 50, T = 10; and (iii) nd = 100, T = 5. The structure of the TDD frame does not have a significant impact on the SE of the eMBB users when SPC is used. Conversely, it deeply affects the per-cell sum SE in the case of PUNC. Indeed, increasing the number of slots per frame reduces the probability of eMBB service outage, as it increases the opportunities for an eMBB user to find slots with no active URLLC users. This argument is supported by the results in Fig. 10, in which the eMBB service outage equals 0.01, 0.0725 and 0.2875 when T = 20, T = 10 and T = 5, respectively. On the other hand, with fewer slots, eMBB users might be active for a longer time, thereby experiencing a higher SE. This explains the larger variations of the per-cell sum SE as T is decreased.
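The dependence of the eMBB service outage on T can be captured by a crude back-of-the-envelope model (an assumption of this sketch, not the paper's definition, which is computed over large-scale fading realizations): if each of K_u URLLC users is active independently with probability a_u in every slot, outage under PUNC requires every one of the T slots to contain URLLC activity.

```python
def embb_outage_prob(T, K_u, a_u):
    """Toy independence model of the eMBB service outage under PUNC:
    a slot is URLLC-free with probability (1 - a_u)^K_u, and an eMBB
    user is in outage when none of the T slots is URLLC-free."""
    p_free_slot = (1.0 - a_u) ** K_u
    return (1.0 - p_free_slot) ** T

a_u = 10 ** -0.5      # URLLC activation probability used in the simulations
for T in (5, 10, 20):
    print(T, round(embb_outage_prob(T, K_u=4, a_u=a_u), 4))
```

With K_u = 4 (i.e., αK URLLC users per cell), this model reproduces the decreasing trend of the simulated outage values as T grows; the exact figures of course come from the full simulator.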
Fig. 11. Network availability, for different transmission and precoding strategies, as the length of the slot varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10−0.5, τp = 80.

The length of the slot directly affects the performance of the URLLC users. As we can see in Fig. 11, the network availability increases drastically with the length of the slot (i.e., the URLLC codeword length). In fact, the length of the URLLC codeword determines the transmission rate of the URLLC users as R = b/nd, thus the shorter the codeword the
higher the rate requirement to be reliably achieved and, in turn, the larger the error probability.2 Again, SPC is the technique that overall guarantees the best performance to both the eMBB and the URLLC users, as its main limitation, namely the multi-user interference it causes, is overcome by using interference-suppression-based precoding schemes. Lastly, although letting the URLLC transmissions span many channel uses is beneficial in terms of network availability, the latency requirements impose localizing the transmissions in time.
2The random-coding union bound in (18) defines the error probability as the probability that the average generalized information density is smaller than the transmission rate requirement.

Fig. 12. Average SE per cell (with 95% confidence interval), for different transmission and precoding strategies, as τp (and nd) varies. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10−0.5, τc = 580, T = 5.

Now, we move our focus to the impact of pilot contamination and estimation overhead on the performance. Fixing the TDD frame length and the number of slots per frame, we vary the length of the uplink training, hence the number of available orthogonal pilots, and the length of each slot accordingly. In Fig. 12 we show how the average sum SE per cell evolves in different operating regimes with respect to the uplink training length. In these simulations, we assume K = 20, α = 0.2, τc = 580 and T = 5. Small values of τp entail low channel estimation overhead but high levels of pilot contamination, which reduces the effectiveness of the precoding. Our pilot assignment scheme preserves the performance of the URLLC users by assigning them unique pilots if available; otherwise, pilots are assigned randomly and contamination hits any user indiscriminately. The maximum number of URLLC users potentially active in this scenario is, according to the chosen parameters, 16. Hence, pilots are assigned randomly when τp = 10, causing both intra- and inter-cell pilot contamination and providing a low sum SE per cell, namely about 30 bit/s/Hz with SPC and less than 10 bit/s/Hz with PUNC. The performance worsens when τp = 20, as the eMBB users have to share only 4 orthogonal pilots since the protection mechanism of the URLLC users is now triggered. As we increase the value of τp, the intra-cell pilot
contamination is primarily reduced by assigning orthogonal
|
| 2435 |
+
pilots to eMBB users of the same cell. If τp ≥32 then intra-cell
|
| 2436 |
+
pilot contamination is prevented and the inter-cell interference
|
| 2437 |
+
among the eMBB users remains the only impairment. The
|
| 2438 |
+
sum SE per cell keep growing up to τp = 80, when all
|
| 2439 |
+
the users in the network are assigned mutual orthogonal
|
| 2440 |
+
pilots and the benefits of having no pilot contamination at all
|
| 2441 |
+
overcome the penalty from increasing the estimation overhead.
|
| 2442 |
+
Trivially, there are no benefits in the channel estimation when
|
| 2443 |
+
further increasing τp, while the estimation overhead turns to
|
| 2444 |
+
be expensive and drastically lowers the sum SE per cell.
|
| 2445 |
+
Finally, notice that RZF and M-MMSE provide essentially
|
| 2446 |
+
the same performance when both the intra- and inter-cell pilot
|
| 2447 |
+
contamination occur, because the ability of suppressing the
|
| 2448 |
+
multi-user interference is poor for both the schemes.
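The interplay between τp and the slot length discussed above follows directly from the TDD frame accounting: out of the τc channel uses per frame, τp are spent on uplink pilots and the remainder is split into T downlink slots, so each slot carries nd = (τc − τp)/T data symbols. A minimal sketch of this accounting, under the settings of Fig. 12 (τc = 580, T = 5):

```python
def slot_length(tau_c: int, tau_p: int, T: int) -> int:
    """Data symbols per slot once tau_p pilot symbols are removed from the frame."""
    return (tau_c - tau_p) // T

# Settings of Fig. 12: tau_c = 580 channel uses, T = 5 slots per frame.
for tau_p in (10, 20, 30, 40, 60, 80):
    nd = slot_length(580, tau_p, 5)
    overhead = tau_p / 580  # fraction of the frame lost to channel estimation
    print(f"tau_p={tau_p:3d}  nd={nd:3d}  overhead={overhead:.3f}")
```

Larger τp buys less pilot contamination at the price of a smaller nd; in Fig. 12 the sum SE peaks at τp = 80, where all users have mutually orthogonal pilots, and decays beyond that point because the overhead dominates.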
As for the URLLC users, pilot contamination heavily affects the network availability when τp < 16, especially when SPC is employed and even though a long slot lowers the rate requirements, as we can observe in Fig. 13. Pilot contamination among URLLC users is destructive mainly because they are likely to be close to the BS and to each other, experiencing strong interference that cannot be resolved when their channel estimates are correlated. Hence, our approach of prioritizing the URLLC users in the pilot assignment is technically sound. In addition, increasing the estimation overhead deeply penalizes the network availability since more resources are subtracted from the data transmission: the slot length reduces and, as already explained earlier, the rate requirements of the URLLC users increase.

Next we study how the performance is affected by the random activation pattern and the number of potentially active URLLC users per frame. Fig. 14 shows the average sum SE per cell as au and α vary, assuming different transmission and precoding schemes, and FPA with ν = 0 and ω = α. Notice that increasing ω proportionally to α is a reasonable approach for SPC, as more power is allocated to an increasing number of potentially active URLLC users, especially for large values of au. In these simulations, we assume two TDD frame configurations: (i) f = 4, T = 5, nd = 100, and (ii) f = 3, T = 8, nd = 65 (whose results are instead shown in Fig. 15). First, we observe that a similar average sum SE per cell can be achieved by adopting either of the considered TDD frame configurations: pilot contamination is what slightly degrades the performance of the eMBB users when using the second frame configuration. The performance of PUNC converges to that of SPC when au ≤ 10−2, hence for sparse activation patterns, as expected. Again, the performance gap between RZF and M-MMSE reduces in the second scenario (Fig. 15) as the inter-cell pilot contamination decreases the ability of M-MMSE in suppressing the multi-user interference. PUNC causes eMBB service outage for large values of au, whereas SPC is still able to cancel the URLLC user interference and to provide excellent SEs. Lastly, we observe that if 80% of the users request URLLC, then the performance of the eMBB users is reduced by almost one third with respect to the case α = 0.2. This result is mainly due to the chosen value of ω in the FPA scheme, which aims to favor the URLLC performance as the number of URLLC users increases.

Fig. 13. Network availability, for different transmission and precoding strategies, as τp (and nd) varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10−0.5, τc = 580, T = 5.

Fig. 14. Average SE per cell, for different transmission and precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = α, K = 20, τc = 580, f = 4, T = 5, nd = 100.

Fig. 15. Average SE per cell, for different transmission and precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = α, K = 20, τc = 580, f = 3, T = 8, nd = 65.

Fig. 16. Network availability, for different precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: SPC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.
The performance achieved by the two considered TDD frame configurations differs appreciably in terms of network availability, as shown in Fig. 16 for SPC and in Fig. 17 for PUNC. In both cases, reducing the length of the slot leads to about a 10% performance loss, while the pilot contamination only concerns the eMBB users. This performance gap is slightly more pronounced when using PUNC because the entire BS power is distributed among the URLLC users, causing stronger mutual interference. Overall, the first TDD frame configuration turns out to be quite robust to any of the considered transmission and precoding strategies, random URLLC activation patterns and URLLC user loads.

A final aspect to be analyzed for this set of simulations is how the probability of eMBB service outage varies with au and α when PUNC is adopted. This completes the picture of the operating points at which PUNC is an effective choice for the eMBB users too and, importantly, further remarks the relevance of properly structuring the TDD frame. As we can see in Fig. 18, the advantage of adopting the TDD frame configuration with T = 8 slots, when using PUNC, consists in better preventing the eMBB service outage than the configuration with T = 5. For instance, when au = 10−1 and α = 0.8 or α = 0.6, partitioning the share of the frame devoted to the data transmission into 8 slots halves the eMBB service outage compared to the case where 5 slots are adopted. Overall, PUNC can compete with SPC only in scenarios with low URLLC traffic loads, upon properly structuring the TDD frame, and as long as a moderate eMBB performance loss is tolerated, either in terms of sum SE per cell or of eMBB service outage. On the other hand, SPC hinges on precoding schemes able to suppress the multi-user interference, which, in turn, leverages the spatial degrees of freedom available at the BS and the high accuracy of the acquired CSI.

Fig. 17. Network availability, for different precoding strategies, as au and α vary. Settings: PUNC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.

Fig. 18. eMBB service outage, for different precoding strategies, as au and α vary. Settings: PUNC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.

Fig. 19. Average SE per cell (with 95% confidence interval), for different transmission and precoding strategies, as K and τc vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = 0.2, α = 0.2, au = 10−1, f = 3, T = 5.
Finally, we evaluate the performance varying the total number of users and the TDD frame length. Fig. 19 shows the average sum SE per cell, for different transmission and precoding strategies, as the number of users per cell, K, grows from 10 to 60, and considering two different TDD frame lengths, namely 580 and 300 channel uses. The latter may support a shorter coherence time and a narrower coherence bandwidth, hence a higher user mobility, compared to the case with 580 channel uses. However, a shorter frame entails fewer resources that can be allocated to the data transmission and the uplink training. In these simulations we assume FPA with ν = 0 and ω = 0.2, α = 0.2, au = 10−1, T = 5 and pilot reuse factor f = 3. Moreover, as τp = fK and τc is fixed, each value of K yields a different configuration of uplink training and slot length, i.e., of τp and nd, respectively. From Fig. 19 we observe the average sum SE per cell increasing with K, which demonstrates the great ability of SPC with M-MMSE and RZF to spatially multiplex the users. The average sum SE per cell saturates for values of K larger than 60 for τc = 580, and around 40 for τc = 300, wherein the channel estimation overhead heavily burdens the SE. PUNC is far inferior to SPC because it allocates fewer resources to the eMBB users, and the performance gap increases with K as the number of URLLC users per cell grows proportionally. Therefore, letting K increase makes punctured slots more likely, which not only subtracts resources from the eMBB users, reducing their SE, but also increases the eMBB service outage, as shown in Table III. Notice that the eMBB service outage does not change when varying τc as long as T is fixed.
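The training/slot-length configurations reported in Tables III and IV follow from τp = fK with f = 3 and nd = (τc − τp)/T with T = 5. A quick sketch reproducing the τp and nd columns of the two tables:

```python
f, T = 3, 5  # pilot reuse factor and number of slots per frame

for tau_c in (580, 300):
    for K in range(10, 70, 10):
        tau_p = f * K              # pilots grow linearly with the user load
        nd = (tau_c - tau_p) // T  # remaining channel uses, split into T slots
        print(f"tau_c={tau_c}  K={K:2d}  tau_p={tau_p:3d}  nd={nd:3d}")
```

For τc = 580 this yields nd = 110, 104, ..., 80 as K grows from 10 to 60, while for τc = 300 the slot shrinks down to nd = 24, which is why the shorter frame is so much more sensitive to the user load.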
Table III and Table IV show the network availability for different transmission and precoding strategies and for different values of K, also emphasizing how τp and nd vary accordingly to meet the TDD frame length. In particular, Table III shows the performance achieved with τc = 580, while Table IV shows the performance achieved with τc = 300.

TABLE III
NETWORK AVAILABILITY AND EMBB SERVICE OUTAGE, τc = 580

K  | τp  | nd  | ηdl (SPC, M-MMSE) | ηdl (SPC, RZF) | ηdl (PUNC, M-MMSE) | ηdl (PUNC, RZF) | ς out (PUNC)
10 | 30  | 110 | 0.9989 | 0.9966 | 1      | 0.9989 | 0.0012
20 | 60  | 104 | 0.9988 | 0.9957 | 0.9944 | 0.9906 | 0.0038
30 | 90  | 98  | 0.9988 | 0.9950 | 0.9934 | 0.9893 | 0.0225
40 | 120 | 92  | 0.9969 | 0.9885 | 0.9881 | 0.9819 | 0.0625
50 | 150 | 86  | 0.9864 | 0.9787 | 0.9790 | 0.9672 | 0.1050
60 | 180 | 80  | 0.9807 | 0.9697 | 0.9728 | 0.9601 | 0.1737

TABLE IV
NETWORK AVAILABILITY AND EMBB SERVICE OUTAGE, τc = 300

K  | τp  | nd | ηdl (SPC, M-MMSE) | ηdl (SPC, RZF) | ηdl (PUNC, M-MMSE) | ηdl (PUNC, RZF) | ς out (PUNC)
10 | 30  | 54 | 0.7936 | 0.7683 | 0.7844 | 0.7534 | 0.0012
20 | 60  | 48 | 0.6786 | 0.6353 | 0.6905 | 0.6685 | 0.0038
30 | 90  | 42 | 0.4796 | 0.4296 | 0.5646 | 0.5435 | 0.0225
40 | 120 | 36 | 0.1813 | 0.1457 | 0.3192 | 0.3192 | 0.0625
50 | 150 | 30 | 0.0021 | 0      | 0.0250 | 0.0250 | 0.1050
60 | 180 | 24 | 0      | 0      | 0      | 0      | 0.1737

The TDD frame with τc = 580 allows achieving a network availability above 96% up to 60 users per cell (of which 12 are URLLC users) with any of the considered transmission and precoding techniques, meaning that such an amount of resources is sufficient to excellently support the considered URLLC user loads and their activation pattern. Conversely, the network availability supported by the TDD frame with τc = 300, reported in Table IV, is considerably lower, even close (or equal) to zero for K ≥ 50, emphasizing how sensitive the network availability is to the length of the TDD frame, hence to the amount of available resources. Importantly, we observe the decreasing trend of the network availability as K increases, which for PUNC is milder and mainly due to the shorter URLLC codeword length, but for SPC is severe and mainly due to the increase of the multi-user interference. Indeed, the results in Table IV clearly confirm that PUNC is more robust than SPC when K ≥ 20.
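The observation that larger K makes punctured slots more likely can be pictured with a deliberately stylized activation model (an illustrative assumption, not the model used in the simulations): if each of the αK URLLC users in a cell activates independently with probability au in a given slot, the chance that the slot is punctured is 1 − (1 − au)^{αK}, which grows quickly with K.

```python
def puncture_prob(K: int, alpha: float, a_u: float) -> float:
    """Probability that at least one URLLC user activates in a slot,
    under an i.i.d. Bernoulli(a_u) activation model (illustrative only)."""
    n_urllc = round(alpha * K)  # URLLC users per cell
    return 1.0 - (1.0 - a_u) ** n_urllc

# alpha = 0.2 and a_u = 1e-1, as in the setting of Fig. 19 and Tables III-IV.
for K in range(10, 70, 10):
    print(f"K={K:2d}  P(punctured slot)={puncture_prob(K, 0.2, 1e-1):.3f}")
```

This toy model only captures the qualitative trend; the ς out values reported in Tables III and IV come from the full system-level simulation.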
VI. CONCLUSION

In this paper, we considered the non-orthogonal multiplexing of heterogeneous services, namely the enhanced mobile broadband (eMBB) and the ultra-reliable low-latency communication (URLLC), in the downlink of a multi-cell massive MIMO system. eMBB and URLLC have opposite characteristics and diverse requirements. eMBB transmissions involve a large payload that spans multiple radio frames and demand a high spectral efficiency, whereas URLLC users intermittently transmit small payloads within a very short time, demanding low latency and an error probability on the order of 10−5. Such a heterogeneity calls for effective resource allocation strategies to let eMBB and URLLC peacefully coexist. Firstly, we provided a unified information-theoretic framework to assess the spectral efficiency (SE) of the eMBB in the infinite-blocklength ergodic regime, and the error probability of the URLLC in the nonasymptotic finite-blocklength regime. Both analyses encompass imperfect channel state information (CSI) acquisition at the base stations (BSs) via uplink pilot transmissions, pilot contamination and pilot overhead, spatially correlated channels and the lack of CSI at the users. Secondly, we generalized the proposed framework to accommodate two alternative coexistence strategies: puncturing (PUNC) and superposition coding (SPC). The former prevents the inter-service interference, aiming to protect the URLLC reliability, while the latter accepts it, aiming to maintain the eMBB service. Thirdly, we numerically evaluated the performance achieved by PUNC and SPC under different precoding and power allocation schemes, and subject to different configurations of the time-division duplex radio frame and of the URLLC random activation pattern. Simulation results revealed that the spatial degrees of freedom available at the BSs, when fully exploited by interference-suppression-based precoding schemes and upon a high-quality CSI acquisition, make it possible to significantly resolve the multi-user interference caused by the SPC operation, providing a far higher eMBB SE than PUNC, yet ensuring similarly great levels of error probability for the URLLC. However, whenever these conditions do not hold, e.g., when a severe pilot contamination degrades the channel estimates or the degrees of freedom are not sufficient to handle the interference between many users, PUNC turns out to be a necessary operation to preserve the URLLC performance, although it might cause eMBB service outage. Unlike prior works wherein the URLLC performance is inappropriately assessed by using the outage capacity analysis or the error probability obtained by the normal approximation, in this work the finite-blocklength information-theoretic analysis relies on mismatched receivers and on the saddlepoint approximation, which is proper of URLLC scenarios in massive MIMO operation. This work can be extended by including massive machine-type communication (mMTC) in the coexistence strategies, and by including the study of the uplink in the proposed generalized framework. Finally, investigating the non-orthogonal multiplexing of heterogeneous services in distributed user-centric systems, such as cell-free massive MIMO [34]–[36], able to provide user proximity, macrodiversity and ubiquitous connectivity, is certainly an appealing future research direction.
REFERENCES

[1] IMT Vision – Framework and overall objectives of the future development of IMT for 2020 and beyond, ITU-R Std. M.2083-0, 2015.
[2] P. Popovski, K. F. Trillingsgaard, O. Simeone, and G. Durisi, "5G wireless network slicing for eMBB, URLLC, and mMTC: A communication-theoretic view," IEEE Access, vol. 6, pp. 55765–55779, 2018.
[3] T. L. Marzetta, "Noncooperative cellular wireless with unlimited numbers of base station antennas," IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3590–3600, 2010.
[4] T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo, Fundamentals of Massive MIMO. Cambridge University Press, 2016.
[5] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO networks: Spectral, energy, and hardware efficiency," Foundations and Trends® in Signal Processing, vol. 11, no. 3-4, pp. 154–655, 2017.
[6] P. Popovski, J. J. Nielsen, Č. Stefanović, E. de Carvalho, E. Ström, K. F. Trillingsgaard, A. Bana, D. M. Kim, R. Kotaba, J. Park, and R. B. Sørensen, "Wireless access for ultra-reliable low-latency communication: Principles and building blocks," IEEE Network, vol. 32, no. 2, pp. 16–23, Mar. 2018.
[7] A.-S. Bana, E. de Carvalho, B. Soret, T. Abrão, J. C. Marinello, E. G. Larsson, and P. Popovski, "Massive MIMO for internet of things (IoT) connectivity," Physical Communication, vol. 37, p. 100859, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1874490719303891
[8] J. Östman, A. Lancho, G. Durisi, and L. Sanguinetti, "URLLC with massive MIMO: Analysis and design at finite blocklength," IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6387–6401, Oct. 2021.
[9] E. Björnson, E. de Carvalho, J. H. Sørensen, E. G. Larsson, and P. Popovski, "A random access protocol for pilot allocation in crowded massive MIMO systems," IEEE Trans. Wireless Commun., vol. 16, no. 4, pp. 2220–2234, Apr. 2017.
[10] A. Anand, G. De Veciana, and S. Shakkottai, "Joint scheduling of URLLC and eMBB traffic in 5G wireless networks," in Proc. IEEE Conference on Computer Communications (INFOCOM), Apr. 2018, pp. 1970–1978.
[11] R. Kassab, O. Simeone, P. Popovski, and T. Islam, "Non-orthogonal multiplexing of ultra-reliable and broadband services in fog-radio architectures," IEEE Access, vol. 7, pp. 13035–13049, 2019.
[12] A. A. Esswie and K. I. Pedersen, "Opportunistic spatial preemptive scheduling for URLLC and eMBB coexistence in multi-user 5G networks," IEEE Access, vol. 6, pp. 38451–38463, 2018.
[13] S. F. Abedin, M. G. R. Alam, S. M. A. Kazmi, N. H. Tran, D. Niyato, and C. S. Hong, "Resource allocation for ultra-reliable and enhanced mobile broadband IoT applications in fog network," IEEE Trans. Commun., vol. 67, no. 1, pp. 489–502, Jan. 2019.
[14] A. Matera, R. Kassab, O. Simeone, and U. Spagnolini, "Non-orthogonal eMBB-URLLC radio access for cloud radio access networks with analog fronthauling," Entropy, vol. 20, no. 9, 2018.
[15] M. Alsenwi, N. H. Tran, M. Bennis, A. Kumar Bairagi, and C. S. Hong, "eMBB-URLLC resource slicing: A risk-sensitive approach," IEEE Commun. Lett., vol. 23, no. 4, pp. 740–743, Apr. 2019.
[16] R. Abreu, T. Jacobsen, G. Berardinelli, K. Pedersen, N. H. Mahmood, I. Z. Kovacs, and P. Mogensen, "On the multiplexing of broadband traffic and grant-free ultra-reliable communication in uplink," in Proc. IEEE Vehicular Technology Conference (VTC-Spring), Apr. 2019, pp. 1–6.
[17] E. N. Tominaga, H. Alves, R. D. Souza, J. Luiz Rebelatto, and M. Latva-aho, "Non-orthogonal multiple access and network slicing: Scalable coexistence of eMBB and URLLC," in Proc. IEEE Vehicular Technology Conference (VTC-Spring), Apr. 2021, pp. 1–6.
[18] F. Saggese, M. Moretti, and P. Popovski, "Power minimization of downlink spectrum slicing for eMBB and URLLC users," IEEE Trans. Wireless Commun., vol. 21, no. 12, pp. 11051–11065, Dec. 2022.
[19] M. Almekhlafi, M. A. Arfaoui, C. Assi, and A. Ghrayeb, "Joint resource and power allocation for URLLC-eMBB traffics multiplexing in 6G wireless networks," in Proc. IEEE Int. Conf. on Commun. (ICC), Jun. 2021, pp. 1–6.
[20] J. Zeng, T. Lv, R. P. Liu, X. Su, Y. J. Guo, and N. C. Beaulieu, "Enabling ultrareliable and low-latency communications under shadow fading by massive MU-MIMO," IEEE Internet Things J., vol. 7, no. 1, pp. 234–246, Jan. 2020.
[21] H. Ren, C. Pan, Y. Deng, M. Elkashlan, and A. Nallanathan, "Joint pilot and payload power allocation for massive-MIMO-enabled URLLC IIoT networks," IEEE J. Sel. Areas Commun., vol. 38, no. 5, pp. 816–830, May 2020.
[22] A. A. Nasir, H. D. Tuan, H. Q. Ngo, T. Q. Duong, and H. V. Poor, "Cell-free massive MIMO in the short blocklength regime for URLLC," IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 5861–5871, Sep. 2021.
[23] Y. Polyanskiy, H. V. Poor, and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, May 2010.
[24] J. Scarlett, A. Martinez, and A. Guillén i Fàbregas, "Mismatched decoding: Error exponents, second-order rates and saddlepoint approximations," IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2647–2666, May 2014.
[25] A. Martinez and A. Guillén i Fàbregas, "Saddlepoint approximation of random-coding bounds," in Proc. Inf. Theory and Applicat. Workshop (ITA), Feb. 2011, pp. 1–6.
[26] W. Yang, G. Durisi, T. Koch, and Y. Polyanskiy, "Quasi-static multiple-antenna fading channels at finite blocklength," IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 4232–4265, Jul. 2014.
[27] G. Durisi, T. Koch, J. Östman, Y. Polyanskiy, and W. Yang, "Short-packet communications over multiple-antenna Rayleigh-fading channels," IEEE Trans. Commun., vol. 64, no. 2, pp. 618–629, Feb. 2016.
[28] J. Östman, G. Durisi, E. G. Ström, M. C. Coşkun, and G. Liva, "Short packets over block-memoryless fading channels: Pilot-assisted or noncoherent transmission?" IEEE Trans. Commun., vol. 67, no. 2, pp. 1521–1536, Feb. 2019.
[29] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[30] A. Lapidoth and S. Shamai, "Fading channels: How perfect need 'perfect side information' be?" IEEE Trans. Inf. Theory, vol. 48, no. 5, pp. 1118–1134, May 2002.
[31] R. G. Gallager, Information Theory and Reliable Communication. New York, NY, USA: John Wiley & Sons, 1968.
[32] 3GPP, Further advancements for E-UTRA physical layer aspects (Release 9). 3GPP TS 36.814, Mar. 2017.
[33] 3GPP, Service requirements for cyber-physical control applications in vertical domains. 3GPP TS 22.104 v. 17.2.0, Dec. 2019.
[34] S. Buzzi and C. D'Andrea, "User-centric communications versus cell-free massive MIMO for 5G cellular networks," in Proc. WSA 2017; 21st International ITG Workshop on Smart Antennas. VDE, 2017, pp. 1–6.
[35] G. Interdonato, E. Björnson, H. Q. Ngo, P. Frenger, and E. G. Larsson, "Ubiquitous cell-free massive MIMO communications," EURASIP J. Wireless Commun. and Netw., vol. 2019, no. 1, p. 197, 2019.
[36] S. Buzzi, C. D'Andrea, A. Zappone, and C. D'Elia, "User-centric 5G cellular networks: Resource allocation and comparison with the cell-free massive MIMO approach," IEEE Trans. Wireless Commun., vol. 19, no. 2, pp. 1250–1264, Feb. 2020.
GNE1T4oBgHgl3EQf_AZD/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
HdAzT4oBgHgl3EQfUvz7/content/tmp_files/2301.01274v1.pdf.txt
ADDED
@@ -0,0 +1,801 @@
Activity Detection for Grant-Free NOMA in Massive IoT Networks

Mehrtash Mehrabi, Student Member, IEEE, Mostafa Mohammadkarimi, Member, IEEE, and Masoud Ardakani, Senior Member, IEEE
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 1H9, Canada
Email: {mehrtash, mostafa.mohammadkarimi, ardakani}@ualberta.ca
Abstract—Recently, the grant-free transmission paradigm has been introduced for massive Internet of Things (IoT) networks to save both time and bandwidth and to transmit messages with low latency. In order to accurately decode the message of each device at the base station (BS), the active devices in each transmission frame must first be identified. In this work, we first formulate activity detection as a threshold-comparison problem. By analyzing its probability of error, we show that the problem is convex in the threshold, which makes it possible to find the optimal threshold that minimizes the activity detection error. Consequently, to achieve an optimal solution, we propose a deep learning (DL)-based method called convolutional neural network (CNN)-based activity detection (AD). To make the method more practical, we consider an unknown and time-varying activity rate for the IoT devices. Our simulations verify that the proposed CNN-AD method achieves higher performance than existing non-Bayesian greedy-based methods, even though those methods require knowledge of the activity rate of the IoT devices while ours works for unknown and even time-varying activity rates.
Index Terms—Activity detection, IoT, deep learning, NOMA, massive MIMO.
I. INTRODUCTION
WIRELESS technology's recent advances provide massive connectivity for machines and objects, resulting in the Internet of Things (IoT) [1]. The demand for the IoT is expected to grow drastically in the near future, with numerous applications in health care systems, education, businesses, and governmental services [2]–[4].
As the demand for connectivity in IoT systems is growing rapidly, it is crucial to improve the spectrum efficiency [5]. Hence, non-orthogonal multiple access (NOMA) has been introduced [6]. To address the main challenges of IoT, including access collisions and massive connectivity, NOMA allows devices to access the channel non-orthogonally by either power-domain [7] or code-domain [8] multiplexing. Meanwhile, this massive connectivity is highly affected by the conventional grant-based NOMA transmission scheme, where the exchange of control signaling between the base station (BS) and IoT devices is needed for channel access. The excessive signaling overhead causes spectral deficiency and large transmission latency. Grant-free NOMA has been introduced to provide a flexible transmission mechanism for the devices and to save time and bandwidth by removing the need for the exchange of control signaling between the BS and the devices. Hence, devices can transmit data randomly at any time slot without any request-grant procedure.
In many IoT applications, a few devices become active for a short period of time to communicate with the BS while the others are inactive [9]. In IoT networks with a large number of nodes, each with a small probability of activity, multiuser detection (MUD) methods heavily rely on activity detection (AD) prior to detection and decoding [4], [10]–[13]. For uplink transmission in IoT systems with a grant-free NOMA transmission scheme, where the performance of MUD can be severely affected by multi-access interference, the reliable detection of both activity and the transmitted signal is very challenging and can be computationally expensive [10], [12].
There have been many studies in the literature suggesting compressive sensing (CS) methods for joint activity and data detection [12]–[16]. Although CS methods can achieve reliable MUD, they only work in networks with a sporadic traffic pattern and are expensive in terms of computational complexity [12]. Recently, deep learning (DL) models have attracted considerable interest in communication systems, and more specifically in signal detection [17]–[19]. A study in [19] suggests using DL for activity and data detection; however, it considers a deterministic traffic pattern for the activity, which is not valid in all environments.
In this work, we first formulate the problem of IoT activity detection as a threshold-comparison problem. We then analyze the probability of error of this activity detection method. Observing that this probability of error is a convex function of the decision threshold, we raise the question of finding the optimal threshold for minimizing the activity detection error. To achieve this goal, we propose a convolutional neural network (CNN)-based AD algorithm for grant-free code-domain uplink NOMA. Unlike existing CS-based AD algorithms, our solution does not need to know the exact number of active devices or even the activity rate of the IoT devices. In fact, in our system model we assume a time-varying and unknown activity rate and a heterogeneous network. Simulation results verify the success of the proposed algorithm.
The rest of this paper is organized as follows. We present the system model in Section II. In Section III, we formulate the device AD problem and derive its probability of error. Section IV introduces our CNN-based solution for device AD. The simulation results are presented in Section V. Finally, the paper is concluded in Section VI.
arXiv:2301.01274v1 [eess.SP] 23 Dec 2022
Fig. 1: CDMA slotted ALOHA transmission frame. Each transmission frame begins with a channel estimation phase, followed by Nf slots of duration Tt = NsTs.
A. Notations
Throughout this paper, (·)* represents the complex conjugate. The matrix transpose and Hermitian operators are shown by (·)^T and (·)^H, respectively. The operator diag(b) returns a square diagonal matrix with the elements of vector b on the main diagonal. Furthermore, E[·] is the statistical expectation, â denotes an estimated value for a, and the size of a set S is shown by |S|. The constellation and the m-dimensional complex space are denoted by D and C^m, respectively. Finally, the circularly symmetric complex Gaussian distribution with mean vector µ and covariance matrix Σ is denoted by CN(µ, Σ).
II. SYSTEM MODEL
We consider a code-division multiple access (CDMA) uplink transmission, where K IoT devices communicate with a single IoT BS equipped with M receive antennas. This commonly used model [3], [6], [19] also considers a frame structure for uplink transmission composed of a channel estimation phase followed by CDMA slotted ALOHA data transmissions, as shown in Fig. 1. Each frame carries Nf short packets of length Tt = NsTs, where Ns is the number of symbols per IoT packet and Ts is the symbol duration. It is assumed that the channel is fixed during each frame, but varies from one frame to another. The channel state information (CSI) is acquired at the BS during the channel estimation phase. As is common in massive machine-type communications (mMTC), we assume that the IoT devices are only active on occasion and transmit short data packets during each frame. The activity rate of the IoT devices is denoted by Pa ∈ [0, Pmax], which is assumed to be unknown and time-varying from one packet transmission to another. Let bk ∈ A be the transmitted symbol of the k-th device, chosen from a finite alphabet A when the k-th device is active; otherwise, bk = 0. Consequently, bk can take values from an augmented alphabet Ā = A ∪ {0}. We also denote the set of all devices and the set of active devices by St = {1, 2, . . . , K} and Sa, respectively, where Sa ⊂ St.¹
A unique spreading code is dedicated to each IoT device, which is simultaneously used for spreading and device identification. This removes the need for the control signaling associated with IoT device identification; such control signals are inefficient for short-packet mMTC. The spreading sequence for the k-th IoT device is denoted by c_k = [c_{1,k} c_{2,k} · · · c_{Nc,k}]^T, where c_{i,k} ∈ {−1, +1} and Nc is the spreading factor. To support a large number of devices, non-orthogonal spreading sequences are employed, resulting in NOMA transmission.
For a single frame, the complex channel coefficient between the k-th IoT device and the m-th receive antenna at the BS is denoted by g_{m,k}. Each active IoT device k ∈ Sa transmits Ns symbols, denoted by b_k = [b_{k,1}, · · · , b_{k,Ns}]^T, during a packet. The received baseband signal over a Rayleigh flat fading channel in a single slot of the slotted ALOHA frame at the m-th receive antenna of the BS is expressed as

Y_m = \sum_{k=1}^{K} g_{m,k} c_k b_k^T + W_m, \quad (1)

where W_m \in \mathbb{C}^{N_c \times N_s} with w_{i,j} \sim \mathcal{CN}(0, \sigma_w^2) and E[w_{i,j} w_{u,v}^*] = \sigma_w^2 \delta[i-u]\delta[j-v] is the additive white Gaussian noise (AWGN) matrix at the m-th receive antenna. The equivalent channel matrix between all IoT devices and the m-th receive antenna can be expressed as \Phi_m = [g_{m,1} c_1, \cdots, g_{m,K} c_K] \in \mathbb{C}^{N_c \times K}. Thus, the received packet at the m-th (m = 1, 2, · · · , M) receive antenna is given by

Y_m = \Phi_m B + W_m, \quad (2)

where B = [b_1, \cdots, b_K]^T \in \mathbb{D}^{K \times N_s}.
Let the total set of all IoT devices be decomposed into a finite number of disjoint groups G_1, G_2, · · · , G_S. Within group G_j, the power of every IoT device is given by P_j. The powers of the devices are equal within each group, but differ from group to group. The fraction of devices in group G_j is therefore |G_j|/K. It is assumed that P_j is known at the BS. This configuration captures heterogeneous IoT networks, where groups of IoT devices capture different phenomena in a given geographical area. A single group of IoT devices with equal power transmission, resulting in a homogeneous network, is also studied in this paper.

¹For simplicity of notation, we remove the index of the frame and packet.
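The signal model in (1)–(2) can be sketched numerically. The following NumPy snippet uses toy dimensions (the experiments in Section V use K = 40, Nc = 32, M = 100, which are assumptions carried over from the paper's setup, not this snippet's): it generates sparse BPSK packets, random ±1 spreading codes, Rayleigh channel gains, and the received matrices Y_m of (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for illustration (the paper uses K = 40, Nc = 32, M = 100).
K, Nc, Ns, M = 8, 4, 6, 2
Pa, sigma_w = 0.3, 0.1                               # activity rate, noise std

C = rng.choice([-1.0, 1.0], size=(Nc, K))            # spreading codes c_k (columns)
active = rng.random(K) < Pa                          # Bernoulli device activity
B = np.where(active[:, None],
             rng.choice([-1.0, 1.0], size=(K, Ns)),  # BPSK rows b_k^T for active devices
             0.0)                                    # b_k = 0 for inactive devices

Y = []
for m in range(M):
    g = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    Phi = C * g[None, :]                             # Phi_m = [g_{m,1}c_1, ..., g_{m,K}c_K]
    W = sigma_w * (rng.standard_normal((Nc, Ns))
                   + 1j * rng.standard_normal((Nc, Ns))) / np.sqrt(2)
    Y.append(Phi @ B + W)                            # eq. (2): Y_m = Phi_m B + W_m
Y = np.stack(Y)                                      # shape (M, Nc, Ns)
print(Y.shape)
```

With a small Pa, most rows of B are zero; this row sparsity is exactly what the AD stage must recover.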
III. PROBLEM FORMULATION
In this section, we present the problem of IoT device AD in the case of known CSI at the receiver and in the presence of sparse or non-sparse transmission. In order to detect the active devices, it is assumed that the BS is equipped with a matched filter and that the precoding matrix and CSI \Phi_m are available. Before AD, the observation matrix at the m-th receive antenna Y_m is passed through the decorrelator to obtain

\bar{Y}_m = \Phi_m^H Y_m \in \mathbb{C}^{K \times N_s}. \quad (3)

In the following, we investigate the details of the AD problem based on Gaussian detection to show how a threshold can be computed to distinguish active IoT devices from inactive ones. The output of the decorrelator receiver for the m-th receive antenna is expressed as

\bar{Y}_m = \Phi_m^H \Phi_m B + \Phi_m^H W_m
= \begin{bmatrix}
\sum_{k=1}^{K} g_{m,1}^* g_{m,k} c_1^T c_k b_k^T + g_{m,1}^* c_1^T W_m \\
\sum_{k=1}^{K} g_{m,2}^* g_{m,k} c_2^T c_k b_k^T + g_{m,2}^* c_2^T W_m \\
\vdots \\
\sum_{k=1}^{K} g_{m,K}^* g_{m,k} c_K^T c_k b_k^T + g_{m,K}^* c_K^T W_m
\end{bmatrix}. \quad (4)

Consequently, the received signal from the k-th user at the m-th receive antenna is

r_k^m = \|g_{m,k} c_k\|_2^2 \, b_k^T + \sum_{i=1, i \neq k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_i^T + g_{m,k}^* c_k^T W_m, \quad (5)

where the second and third terms are the multi-user interference and the additive noise, respectively. Since an IoT device is either active or inactive for the entire packet transmission, we determine the activity status of a device based on each received symbol and then use the results in [20] for spectrum sensing to combine the results obtained from all N_s symbols. Device AD in the case of single-symbol transmission is studied in [12], and we follow that approach to determine the status of each device based on each received symbol and then combine the results. The j-th received symbol from device k at receive antenna m, denoted by r_{k,j}^m, is

r_{k,j}^m = \|g_{m,k} c_k\|_2^2 \, b_{k,j} + \sum_{i=1, i \neq k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_{i,j} + g_{m,k}^* c_k^T w_j, \quad (6)

where the first term is the desired signal, the second term is the multi-user interference from the other devices, and the third term is the additive noise. For the sake of simplicity, we assume that BPSK modulation is used, i.e., the transmitted symbols are drawn from A = {−1, +1} with p(b_{k,j} = +1) = p(b_{k,j} = −1) = 1/2. The multi-user interference plus noise in r_{k,j}^m has variance

\sigma_{k,j}^2 = \mathrm{var}\Big( \sum_{i=1, i \neq k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_{i,j} + g_{m,k}^* c_k^T w_j \Big)
= \sum_{i=1, i \neq k}^{K} \big| g_{m,k}^* g_{m,i} c_k^T c_i \big|^2 P_a + \sigma_w^2 \, \| g_{m,k}^* c_k^T \|_2^2. \quad (7)

Now we can approximate r_{k,j}^m by a Gaussian distribution as \mathcal{N}(\|g_{m,k} c_k\|_2^2, \sigma_{k,j}^2) [20]. In order to identify the activity of device k, our goal is to propose an algorithm that defines a threshold \tau and declares device k active if |r_{k,j}^m| > \tau. The probability of error is then computed as

P_e^{k,j} = P_a \, p(|r_{k,j}^m| < \tau \mid b_{k,j} \neq 0) + (1 - P_a) \, p(|r_{k,j}^m| > \tau \mid b_{k,j} = 0), \quad (8)

where p(r_{k,j}^m \mid b_{k,j} \neq 0) \sim \mathcal{N}(\|g_{m,k} c_k\|_2^2, \sigma_{k,j}^2) and p(r_{k,j}^m \mid b_{k,j} = 0) \sim \mathcal{N}(0, \sigma_{k,j}^2). We can rewrite (8) as

P_e^{k,j} = 2(1 - P_a) \, Q\!\left(\frac{\tau}{\sigma_{k,j}}\right) + P_a \, Q\!\left(\frac{\|g_{m,k} c_k\|_2^2 - \tau}{\sigma_{k,j}}\right), \quad (9)

where Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp(-t^2/2)\, dt denotes the Gaussian tail function. The probability of error in (9) is a convex function of \tau, and hence a fine-tuned neural network is capable of solving this problem and detecting the active devices by finding the optimum \tau. In the following section, we define our DL-based algorithm to find the optimum \tau and minimize the probability of error.
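The convexity of (9) in τ can be checked numerically. The sketch below evaluates (9) on a grid of thresholds and locates its minimizer; the gain ||g_{m,k} c_k||²₂ = 1, σ_{k,j} = 0.25, and P_a = 0.1 are hypothetical values chosen for illustration.

```python
import numpy as np
from math import erfc

# Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2)), vectorized for arrays.
Q = np.vectorize(lambda x: 0.5 * erfc(x / np.sqrt(2.0)))

Pa, gain, sigma = 0.1, 1.0, 0.25              # hypothetical P_a, ||g c||_2^2, sigma_{k,j}
tau = np.linspace(0.0, gain, 401)
Pe = 2 * (1 - Pa) * Q(tau / sigma) + Pa * Q((gain - tau) / sigma)   # eq. (9)

tau_opt = tau[np.argmin(Pe)]                  # grid-search minimizer of the convex P_e(tau)
convex = bool(np.all(np.diff(Pe, 2) > -1e-9)) # discrete second differences stay >= 0
print(tau_opt, convex)
```

For these values the optimum lies well above gain/2, reflecting that false alarms are weighted by 2(1 − Pa) while misses are weighted only by Pa.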
IV. DL-BASED AD
Device AD is the first step toward effective MUD in grant-free uplink multiple access. Recent studies on AD suggest using CS methods to identify the set of active devices [14], [15]. However, these methods fail in practical scenarios where the activity rate is time-varying and/or unknown. Moreover, they are mainly effective for low device activity rates, i.e., when the sparsity level is high [14]. In this section, we propose our AD algorithm, called CNN-AD, by employing a CNN for heterogeneous IoT networks. With a suitably designed CNN, the underlying pattern in device activity can be easily learned.
A. CNN-AD Algorithm
Fig. 2 illustrates the structure of the proposed CNN-AD algorithm. As seen, it is composed of three blocks: 1) preprocessing, 2) CNN processing, and 3) hypothesis testing.
In the preprocessing step, after sequence matched filtering, we first sort the observation matrices from all M receive antennas into a 3D tensor as

R = \begin{bmatrix} P\bar{Y}_1 \\ P\bar{Y}_2 \\ \vdots \\ P\bar{Y}_M \end{bmatrix}, \quad (10)

where P\bar{Y}_m \in \mathbb{C}^{K \times N_s}, \bar{Y}_m = \Phi_m^H Y_m \in \mathbb{C}^{K \times N_s} for m = 1, 2, \cdots, M, and P \triangleq \mathrm{diag}(p_1, \cdots, p_K) with p_k \in \{1/P_1, \cdots, 1/P_S\} for k = 1, 2, \cdots, K.
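The preprocessing of (3) and (10) can be sketched as follows. The snippet builds the power-normalized tensor R from randomly generated observations; the group powers and all dimensions below are hypothetical toy values, not the paper's experimental setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy dimensions: K devices, Nc chips, Ns symbols, M antennas,
# S power groups (the paper's experiments use K = 40, Nc = 32, M = 100).
K, Nc, Ns, M, S = 6, 4, 5, 3, 2
powers = np.array([1.0, 4.0])                 # group powers P_1, ..., P_S
group = rng.integers(0, S, size=K)            # group index of each device
P = np.diag(1.0 / powers[group])              # P = diag(p_1, ..., p_K), p_k in {1/P_j}

slices = []
for m in range(M):
    g = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    C = rng.choice([-1.0, 1.0], size=(Nc, K))
    Phi = C * g[None, :]                      # equivalent channel matrix Phi_m
    Y_m = rng.standard_normal((Nc, Ns)) + 1j * rng.standard_normal((Nc, Ns))
    Ybar = Phi.conj().T @ Y_m                 # eq. (3): decorrelator output
    slices.append(P @ Ybar)                   # power-normalized slice P * Ybar_m
R = np.stack(slices)                          # 3D tensor R of shape (M, K, Ns)
print(R.shape)
```

Stacking the M normalized slices along a third axis yields the tensor fed to the CNN input layer.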
In the CNN processing block, the 3D tensor R is used as the input of a suitably designed CNN. CNN models benefit from convolutional layers, which perform convolution operations between matrices instead of multiplications. This leads to dimension reduction for feature extraction and provides a new input to the subsequent network layers that includes only the useful features of the original high-dimensional input. IoT device AD can be formulated as a binary classification or regression problem. Formulating device AD as a classification problem is straightforward, but it requires careful design of the CNN's structure and parameters.
In the hypothesis testing block, the K outputs of the CNN's Sigmoid layer are compared with a predefined threshold to determine the activity status of the IoT devices in the network. If the k-th node of the Sigmoid layer exceeds the threshold, the k-th IoT device is identified as active.
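The hypothesis-testing block amounts to an element-wise comparison. A minimal sketch, using the 0.5 threshold shown in Fig. 2 and hypothetical Sigmoid outputs:

```python
import numpy as np

# Map the CNN's K Sigmoid outputs to binary activity decisions.
def detect_active(sigmoid_out, threshold=0.5):
    sigmoid_out = np.asarray(sigmoid_out)
    return (sigmoid_out > threshold).astype(int)   # 1 = active, 0 = inactive

a_hat = detect_active([0.91, 0.07, 0.64, 0.33])    # hypothetical Sigmoid outputs
print(a_hat)
```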
B. Training Phase
In order to train the designed CNN, we define the activity vector a as

a = [a_1 \; a_2 \; \cdots \; a_K]^T, \quad (11)

where a_k is 1 when the k-th IoT device is active and 0 otherwise. We train our model with N independent training samples (R^{(j)}, a^{(j)}), where j = 1, 2, · · · , N, and a^{(j)} and R^{(j)} are the activity vector and observation matrix of the j-th training sample, respectively. Our objective is to train the designed CNN to generate the desired output vector a^{(j)}
Fig. 2: Model structure of the proposed CNN-AD algorithm (preprocessing of the messages received at the M antennas into the tensor R; a CONV layer with 128 3×3 kernels, stride 3, same padding; 2×2 MAX-POOL with stride 2; a fully connected layer with 1024 ReLU units; a fully connected layer with K Sigmoid outputs; and a hypothesis test against the threshold 0.5)
for input matrix R^{(j)}. The model tries to learn a non-linear transformation Ψ such that

\hat{a}^{(j)} = \Psi(R^{(j)}; \Theta), \quad (12)

where Θ is the set of parameters learned during training by minimizing the loss function. The output of the model, i.e., â, determines the activity probabilities of the IoT devices. Here, since there are two classes (active or inactive) for each IoT device, the loss function is chosen as the binary cross-entropy. For each training sample j, the binary cross-entropy loss function compares the probability that the IoT devices are active (â^{(j)}) with the true activity vector a^{(j)} as

\mathrm{Loss}(\Theta) = \frac{1}{N} \sum_{j=1}^{N} -\Big[ a^{(j)} \log(\hat{a}^{(j)}) + (1 - a^{(j)}) \log(1 - \hat{a}^{(j)}) \Big], \quad (13)

where log(·) performs an element-wise log operation on â^{(j)}, and the vector multiplication is also element-wise.
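The loss in (13) can be written compactly in NumPy. Note that (13) leaves the reduction over the K devices implicit; the sketch below sums over devices and averages over the N samples, which is our assumption.

```python
import numpy as np

def bce_loss(a_true, a_hat, eps=1e-12):
    """Binary cross-entropy of eq. (13) for (N, K) labels and Sigmoid outputs."""
    a_hat = np.clip(a_hat, eps, 1.0 - eps)    # guard against log(0)
    per_entry = -(a_true * np.log(a_hat)      # element-wise products, as in the text
                  + (1 - a_true) * np.log(1 - a_hat))
    # Assumed reduction: sum over the K devices, average over the N samples.
    return per_entry.sum(axis=1).mean()

loss = bce_loss(np.array([[1.0, 0.0]]), np.array([[0.9, 0.1]]))
print(loss)
```

The loss approaches zero as the Sigmoid outputs approach the true activity vector, and grows as they drift away.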
V. EXPERIMENTS
In this section, we evaluate the performance of the proposed CNN-AD algorithm through various simulation experiments and compare it with some of the existing methods.
A. Simulation Setup
We consider an IoT network with K devices, where K > Nc, and pseudo-random codes are used as the spreading sequences for the IoT devices. The probability of activity Pa is considered to be unknown and time-varying from one packet to another in the range Pa ∈ [0, Pmax], where Pmax = 0.1. BPSK modulation is used for uplink transmission. Without loss of generality, the channel coefficients between the IoT devices and the BS are modeled as independent zero-mean complex Gaussian random variables with variance σ²_{k,m} = 1, k ∈ St and m ∈ {1, · · · , M}. The additive white noise is modeled as zero-mean complex Gaussian random variables with variance σ_w², and the signal-to-noise ratio (SNR) in dB is defined as γ ≜ 10 log(σ_s²/σ_w²), where σ_s² = Pa Pt is the average transmit power with Pt = Σ_{k=1}^K p_k as the total transmission power. Unless otherwise mentioned, we consider spreading sequences with spreading factor Nc = 32.
In order to train CNN-AD, we generate 10⁵ independent samples and use 80% for training and the rest for validation and testing. The Adam optimizer [21] with a learning rate of 10⁻³ is used to minimize the cross-entropy loss function in (13).
Fig. 3: Achieved AER with MMSE with a priori AD of OMP, AMP, and CNN-AD, without knowing the number of active devices (uniform and non-uniform power allocation).

Fig. 4: Impact of Pa on the performance of different methods as the a priori AD for MMSE, in terms of achieved BER (uniform power).
B. Simulation Results
1) Performance Evaluation of CNN-AD: We assess CNN-AD through various simulations and compare it with the existing CS-based methods, including orthogonal matching pursuit (OMP) [22] and approximate message passing (AMP) [23]. The impact of SNR on the activity error rate (AER) achieved by different AD algorithms in both homogeneous and heterogeneous IoT networks with uniform and non-uniform power allocation is shown in Fig. 3. The AERs of the different methods are compared over a wide range of SNRs in an IoT system with a total of K = 40 IoT devices and a single BS with M = 100 receive antennas. As expected, the AER of all AD algorithms decreases with increasing SNR. However, CNN-AD achieves
|
| 648 |
+
|
| 649 |
+
IoT Device
|
| 650 |
+
Model
|
| 651 |
+
Precision
|
| 652 |
+
Recall
|
| 653 |
+
F1-score
|
| 654 |
+
OMP
|
| 655 |
+
28%
|
| 656 |
+
32%
|
| 657 |
+
30%
|
| 658 |
+
Device A
|
| 659 |
+
AMP
|
| 660 |
+
31%
|
| 661 |
+
35%
|
| 662 |
+
33%
|
| 663 |
+
CNN-AD
|
| 664 |
+
73%
|
| 665 |
+
92%
|
| 666 |
+
81%
|
| 667 |
+
OMP
|
| 668 |
+
33%
|
| 669 |
+
32%
|
| 670 |
+
32%
|
| 671 |
+
Device B
|
| 672 |
+
AMP
|
| 673 |
+
38%
|
| 674 |
+
35%
|
| 675 |
+
36%
|
| 676 |
+
CNN-AD
|
| 677 |
+
100%
|
| 678 |
+
83%
|
| 679 |
+
91%
|
| 680 |
+
TABLE I: Performance analysis different algorithms for two typical
|
| 681 |
+
IoT devices for Pmax = 0.1 at γ = 10 dB.
|
| 682 |
+
the best performance since unlike the non-Bayesian greedy
|
| 683 |
+
algorithms OMP and AMP, our method relies on the statistical
|
| 684 |
+
distributions of device activities and channels and exploit them
|
| 685 |
+
in the training process.
|
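As a point of reference for what the greedy baselines do, a minimal OMP sketch recovers a sparse activity vector from one received observation. The dimensions, bipolar spreading matrix, and noise level below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def omp(Phi, y, n_iter):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the residual, then re-fit on the support by least squares."""
    residual = y.copy()
    support = []
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in support:
            support.append(k)
        Phi_s = Phi[:, support]
        x_s, *_ = np.linalg.lstsq(Phi_s, y, rcond=None)
        residual = y - Phi_s @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x, set(support)

rng = np.random.default_rng(1)
K, Nc, n_active = 40, 32, 4          # devices, spreading factor, active devices
Phi = rng.choice([-1.0, 1.0], size=(Nc, K)) / np.sqrt(Nc)
true_support = set(rng.choice(K, size=n_active, replace=False).tolist())
x_true = np.zeros(K)
for k in true_support:
    x_true[k] = 1.0                  # unit gain for each active device
y = Phi @ x_true + 0.01 * rng.normal(size=Nc)
x_hat, est_support = omp(Phi, y, n_iter=n_active)
```

Note that, as the surrounding text points out, this greedy recovery uses no prior on the device activity statistics, which is what CNN-AD exploits instead.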
| 686 |
+
Fig. 4 illustrates the effect of activity rate on the bit error
|
| 687 |
+
rate (BER) for minimum mean square error (MMSE)-MUD
|
| 688 |
+
with different AD algorithms at γ = 10 dB SNR. As seen,
|
| 689 |
+
as the activity rate increases, the number of active devices
|
| 690 |
+
also increases accordingly and thus it becomes difficult to
|
| 691 |
+
detect all the active devices. This results in a higher BER. We
|
| 692 |
+
use Pmax = 0.1 to train CNN-AD. Thus, the MMSE-MUD
|
| 693 |
+
with CNN-AD shows performance degradation for the activity
|
| 694 |
+
rates of larger than Pmax = 0.1. However, it still outperforms
|
| 695 |
+
the performance of the MMSE-MUD with OMP and AMP
|
| 696 |
+
AD algorithms. It should be mentioned that this performance
|
| 697 |
+
improves when CNN-AD is trained for a larger value of Pmax.
|
We further investigate the AD algorithms in terms of other metrics for two typical IoT devices for Pmax = 0.1 at γ = 10 dB SNR, as presented in Table I. In this table we compare the precision, recall, and F1-score, defined in [24], achieved by CNN-AD against the OMP and AMP AD algorithms. All three metrics are improved by using CNN-AD.
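The per-device metrics in Table I follow the standard definitions in [24]; a small helper makes them concrete. The toy inputs below are hypothetical, not the paper's data:

```python
def ad_metrics(true_active, pred_active):
    """Precision, recall, and F1-score for one device's activity detection,
    given the sets of frames where it was truly / declared active."""
    tp = len(true_active & pred_active)   # correctly detected activity
    fp = len(pred_active - true_active)   # false alarms
    fn = len(true_active - pred_active)   # missed detections
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy run: device truly active in 5 of 10 frames; detector scores 3 hits
# and 1 false alarm.
true_active = {0, 2, 4, 6, 8}
pred_active = {0, 2, 4, 7}
p, r, f1 = ad_metrics(true_active, pred_active)   # 0.75, 0.6, ~0.667
```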
VI. CONCLUSIONS

In this paper, we consider the problem of AD for grant-free NOMA in IoT networks. Depending on the application, IoT devices can be inactive for long periods of time and only become active when transmitting to the BS. Hence, identifying the active devices is required for accurate data detection. Some studies propose CS-based methods for AD; however, those methods require a high level of message sparsity. To remove this requirement and exploit the statistical properties of the channels, we propose a CNN-based method called CNN-AD to detect active IoT devices. Comparison with available methods shows the strength of our algorithm.
ACKNOWLEDGMENT

The study presented in this paper is supported by Alberta Innovates and the Natural Sciences and Engineering Research Council of Canada (NSERC).
REFERENCES

[1] G. Durisi, T. Koch, and P. Popovski, "Toward massive, ultrareliable, and low-latency wireless communication with short packets," Proceedings of the IEEE, vol. 104, no. 9, pp. 1711–1726, 2016.
[2] L. D. Xu, W. He, and S. Li, "Internet of things in industries: A survey," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2233–2243, 2014.
[3] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, "Internet of Things: A survey on enabling technologies, protocols, and applications," IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2347–2376, 2015.
[4] C. Bockelmann, N. Pratas, H. Nikopour, K. Au, T. Svensson, C. Stefanovic, P. Popovski, and A. Dekorsy, "Massive machine-type communications in 5G: Physical and MAC-layer solutions," IEEE Communications Magazine, vol. 54, no. 9, pp. 59–65, 2016.
[5] W. Ejaz and M. Ibnkahla, "Multiband spectrum sensing and resource allocation for IoT in cognitive 5G networks," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 150–163, 2018.
[6] Z. Ding, P. Fan, and H. V. Poor, "Impact of user pairing on 5G nonorthogonal multiple-access downlink transmissions," IEEE Transactions on Vehicular Technology, vol. 65, no. 8, pp. 6010–6023, 2016.
[7] Y. Saito, Y. Kishiyama, A. Benjebbour, T. Nakamura, A. Li, and K. Higuchi, "Non-orthogonal multiple access (NOMA) for cellular future radio access," in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), 2013, pp. 1–5.
[8] K. Au, L. Zhang, H. Nikopour, E. Yi, A. Bayesteh, U. Vilaipornsawai, J. Ma, and P. Zhu, "Uplink contention based SCMA for 5G radio access," in 2014 IEEE Globecom Workshops (GC Wkshps), 2014, pp. 900–905.
[9] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. de Carvalho, "Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 88–99, Sep. 2018.
[10] S. Verdú et al., Multiuser Detection. Cambridge University Press, 1998.
[11] Y. Zhang, Q. Guo, Z. Wang, J. Xi, and N. Wu, "Block sparse Bayesian learning based joint user activity detection and channel estimation for grant-free NOMA systems," IEEE Transactions on Vehicular Technology, vol. 67, no. 10, pp. 9631–9640, 2018.
[12] H. Zhu and G. B. Giannakis, "Exploiting sparse user activity in multiuser detection," IEEE Transactions on Communications, vol. 59, no. 2, pp. 454–465, Feb. 2011.
[13] H. F. Schepker, C. Bockelmann, and A. Dekorsy, "Coping with CDMA asynchronicity in compressive sensing multi-user detection," in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), Jun. 2013, pp. 1–5.
[14] Z. Chen, F. Sohrabi, and W. Yu, "Sparse activity detection for massive connectivity," IEEE Transactions on Signal Processing, vol. 66, no. 7, pp. 1890–1904, Apr. 2018.
[15] K. Takeuchi, T. Tanaka, and T. Kawabata, "Performance improvement of iterative multiuser detection for large sparsely spread CDMA systems by spatial coupling," IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1768–1794, Apr. 2015.
[16] Y. Wang, X. Zhu, E. G. Lim, Z. Wei, Y. Liu, and Y. Jiang, "Compressive sensing based user activity detection and channel estimation in uplink NOMA systems," in 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020, pp. 1–6.
[17] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, vol. 1.
[18] M. Mohammadkarimi, M. Mehrabi, M. Ardakani, and Y. Jing, "Deep learning based sphere decoding," IEEE Transactions on Wireless Communications, 2019.
[19] X. Miao, D. Guo, and X. Li, "Grant-free NOMA with device activity learning using long short-term memory," IEEE Wireless Communications Letters, 2020.
[20] W. Zhang, R. K. Mallik, and K. B. Letaief, "Cooperative spectrum sensing optimization in cognitive radio networks," in 2008 IEEE International Conference on Communications, 2008, pp. 3411–3415.
[21] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[22] T. T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680–4688, 2011.
[23] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914–18919, 2009.
[24] C. Goutte and E. Gaussier, "A probabilistic interpretation of precision, recall and F-score, with implication for evaluation," in European Conference on Information Retrieval. Springer, 2005, pp. 345–359.
HdAzT4oBgHgl3EQfUvz7/content/tmp_files/load_file.txt
ADDED
|
@@ -0,0 +1,387 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf, len=386

Activity Detection for Grant-Free NOMA in Massive IoT Networks

Mehrtash Mehrabi, Student Member, IEEE, Mostafa Mohammadkarimi, Member, IEEE, and Masoud Ardakani, Senior Member, IEEE
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 1H9, Canada
Email: {mehrtash, mostafa.mohammadkarimi, ardakani}@ualberta.ca

Abstract—Recently, the grant-free transmission paradigm has been introduced for massive Internet of Things (IoT) networks to save both time and bandwidth and to transmit messages with low latency. In order to accurately decode the message of each device at the base station (BS), the active devices at each transmission frame must first be identified. In this work, we first investigate the problem of activity detection as a threshold-comparison problem. We show the convexity of the activity detection method through analyzing its probability of error, which makes it possible to find the optimal threshold for minimizing the activity detection error. Consequently, to achieve an optimum solution, we propose a deep learning (DL)-based method called convolutional neural network activity detection (CNN-AD). To make it more practical, we consider an unknown and time-varying activity rate for the IoT devices. Our simulations verify that the proposed CNN-AD method achieves higher performance than the existing non-Bayesian greedy-based methods, which need to know the activity rate of the IoT devices, whereas our method works for unknown and even time-varying activity rates.

Index Terms—Activity detection, IoT, deep learning, NOMA, massive MIMO.
I. INTRODUCTION

WIRELESS technology's recent advances provide massive connectivity for machines and objects, resulting in the Internet of Things (IoT) [1]. The demand for the IoT is expected to grow drastically in the near future, with numerous applications in health care systems, education, business, and governmental services [2]–[4]. As the demand for connectivity in IoT systems is growing rapidly, it is crucial to improve spectrum efficiency [5]. Hence, non-orthogonal multiple access (NOMA) has been introduced [6]. To address the main challenges of IoT, including access collisions and massive connectivity, NOMA allows devices to access the channel non-orthogonally by either power-domain [7] or code-domain [8] multiplexing. Meanwhile, this massive connectivity is highly affected by the conventional grant-based NOMA transmission scheme, where the exchange of control signaling between the base station (BS) and IoT devices is needed for channel access. The excessive signaling overhead causes spectral deficiency and large transmission latency. Grant-free NOMA has been introduced to make the transmission mechanism flexible and to save time and bandwidth by removing the exchange of control signaling between the BS and devices. Hence, devices can transmit data randomly at any time slot without any request-grant procedure.

In many IoT applications, a few devices become active for a short period of time to communicate with the BS while the others remain inactive [9]. In IoT networks with a large number of nodes, each with a small probability of activity, multiuser detection (MUD) methods heavily rely on activity detection (AD) prior to detection and decoding [4], [10]–[13]. For uplink transmission in IoT systems with a grant-free NOMA transmission scheme, where the performance of MUD can be severely affected by multi-access interference, reliable detection of both activity and the transmitted signal is very challenging and can be computationally expensive [10], [12]. Many studies in the literature suggest compressive sensing (CS) methods for joint activity and data detection [12]–[16]. Although CS methods can achieve reliable MUD, they only work in networks with a sporadic traffic pattern and are expensive in terms of computational complexity [12]. Recently, deep learning (DL) models have attracted a lot of interest in communication systems, and more specifically in signal detection [17]–[19]. The study in [19] uses DL for activity and data detection; however, it considers a deterministic traffic pattern for the activity, which is not valid in all environments.

In this work, we first formulate the problem of IoT activity detection as a threshold-comparison problem. We then analyze the probability of error of this activity detection method. Observing that this probability of error is a convex function of the decision threshold, we raise the question of finding the optimal threshold for minimizing the activity detection error. To achieve this goal, we propose a convolutional neural network (CNN)-based AD algorithm for grant-free code-domain uplink NOMA. Unlike existing CS-based AD algorithms, our solution does not need to know the exact number of active devices or even the activity rate of the IoT devices. In fact, in our system model we assume a time-varying and unknown activity rate and a heterogeneous network. Simulation results verify the success of the proposed algorithm.
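The threshold-comparison view can be illustrated numerically. The sketch below assumes a Gaussian test statistic with mean mu1 when the device is active and mu0 when inactive (mu0, mu1, sigma, and P_a are illustrative values, not the paper's model); the total detection error is convex in the threshold, so a simple search finds its minimizer:

```python
import math

def detection_error(tau, p_a, mu0=0.0, mu1=2.0, sigma=1.0):
    """Total AD error at threshold tau for an assumed Gaussian statistic."""
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian Q-function
    p_false_alarm = q((tau - mu0) / sigma)            # inactive declared active
    p_miss = 1.0 - q((tau - mu1) / sigma)             # active declared inactive
    return p_a * p_miss + (1.0 - p_a) * p_false_alarm

# Grid search over the convex-in-tau error to locate the optimal threshold.
p_a = 0.1
taus = [i / 1000 for i in range(-1000, 3001)]
errs = [detection_error(t, p_a) for t in taus]
tau_star = taus[errs.index(min(errs))]
```

For this equal-variance Gaussian pair, the minimizer coincides with the likelihood-ratio threshold (mu0 + mu1)/2 + sigma^2 * ln((1 - p_a)/p_a) / (mu1 - mu0), which is what the grid search recovers.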
The rest of this paper is organized as follows. We present the system model in Section II. In Section III we formulate the device AD problem and derive its probability of error. Section IV introduces our CNN-based solution for device AD. The simulation results are presented in Section V. Finally, the paper is concluded in Section VI.

arXiv:2301.01274v1 [eess.SP] 23 Dec 2022

Fig. 1: CDMA slotted ALOHA transmission frame.

A. Notations

Throughout this paper, (·)* represents the complex conjugate.
Matrix transpose and Hermitian operators are denoted by (·)^T and (·)^H, respectively. The operator diag(b) returns a square diagonal matrix with the elements of vector b on the main diagonal. Furthermore, E[·] is the statistical expectation, â denotes an estimate of a, and the size of a set S is denoted by |S|. The constellation and m-dimensional complex spaces are denoted by D and C^m, respectively. Finally, the circularly symmetric complex Gaussian distribution with mean vector µ and covariance matrix Σ is denoted by CN(µ, Σ).
II. SYSTEM MODEL

We consider code-division multiple access (CDMA) uplink transmission, where K IoT devices communicate with a single IoT BS equipped with M receive antennas. This commonly used model [3], [6], [19] also considers a frame structure for uplink transmission composed of a channel estimation phase followed by CDMA slotted ALOHA data transmissions, as shown in Fig. 1. Each frame carries Nf short packets of length Tt = Ns Ts, where Ns is the number of symbols per IoT packet and Ts is the symbol duration. It is assumed that the channel is fixed during each frame but varies from one frame to another. The channel state information (CSI) is acquired at the BS during the channel estimation phase. As is common in massive machine-type communications (mMTC), we assume that the IoT devices are only active on occasion and transmit short data packets during each frame. The activity rate of the IoT devices is denoted by Pa ∈ [0, Pmax], which is assumed to be unknown and time-varying from one packet transmission to another.

Let bk ∈ A be the transmitted symbol of the k-th device, chosen from a finite alphabet A when the k-th device is active; otherwise, bk = 0. Consequently, bk takes values from an augmented alphabet Ā = A ∪ {0}. We also denote the set of all devices and the set of active devices by St = {1, 2, ..., K} and Sa, respectively, where Sa ⊂ St.¹

A unique spreading code is dedicated to each IoT device, which is simultaneously used for spreading and device identification. This removes the need for the control signaling associated with IoT device identification; control signals are inefficient for short-packet mMTC. The spreading sequence for the k-th IoT device is denoted by ck = [c1,k c2,k ··· cNc,k]^T, where ci,k ∈ {−1, +1} and Nc is the spreading factor. To support a large number of devices, non-orthogonal spreading sequences are employed, resulting in NOMA transmission.

¹For simplicity of notation, we drop the frame and packet indices.

For a single frame, the complex channel coefficient between the k-th IoT device and the m-th receive antenna at the BS is denoted by gm,k. Each active IoT device k ∈ Sa transmits Ns symbols, denoted by bk = [bk,1, ··· , bk,Ns]^T, during a packet. The received baseband signal over a Rayleigh flat-fading channel in a single slot of the slotted ALOHA frame at the m-th receive antenna of the BS is expressed as

Ym = Σ_{k=1}^{K} gm,k ck bk^T + Wm,    (1)

where Wm ∈ C^{Nc×Ns} with wi,j ~ CN(0, σw²) and E[wi,j w*u,v] = σw² δ[i − u] δ[j − v] is the additive white Gaussian noise (AWGN) matrix at the m-th receive antenna. The equivalent channel matrix between all IoT devices and the m-th receive antenna can be expressed as Φm = [gm,1 c1, ··· , gm,K cK] ∈ C^{Nc×K}. Thus, the received packet at the m-th (m = 1, 2, ··· , M) receive antenna is given by

Ym = Φm B + Wm,    (2)

where B = [b1, ··· , bK]^T ∈ D^{K×Ns}.
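Equations (1)–(2) translate directly into a small simulation. The sketch below uses illustrative dimensions and assumes BPSK for the alphabet A; it builds Φm from the bipolar codes and Rayleigh gains and forms Ym = Φm B + Wm per antenna:

```python
import numpy as np

rng = np.random.default_rng(7)
K, M, Nc, Ns = 40, 4, 32, 16        # devices, antennas, spreading factor, symbols
P_a = 0.1                           # activity rate (assumed value)

# Bipolar spreading codes c_k and Rayleigh flat-fading gains g_{m,k}.
C = rng.choice([-1.0, 1.0], size=(Nc, K))
G = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

# BPSK symbols; inactive devices (with probability 1 - P_a) send b_k = 0.
active = rng.random(K) < P_a
B = rng.choice([-1.0, 1.0], size=(K, Ns)) * active[:, None]

sigma_w = 0.1
Y = np.empty((M, Nc, Ns), dtype=complex)
for m in range(M):
    Phi_m = C * G[m]                # Phi_m = [g_{m,1} c_1, ..., g_{m,K} c_K]
    W_m = sigma_w * (rng.normal(size=(Nc, Ns))
                     + 1j * rng.normal(size=(Nc, Ns))) / np.sqrt(2)
    Y[m] = Phi_m @ B + W_m          # Eq. (2): Y_m = Phi_m B + W_m
```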
| 81 |
+
Let the total set of IoT devices be decomposed into a finite number of disjoint groups G_1, G_2, ..., G_S. Within group G_j, the transmit power of every IoT device is P_j; the powers are equal within each group but differ from group to group. The fraction of devices in group G_j is therefore |G_j|/K. It is assumed that P_j is known at the BS. This configuration captures heterogeneous IoT networks, in which groups of IoT devices sense different phenomena in a given geographical area. A single group of IoT devices transmitting with equal power, i.e., a homogeneous network, is also studied in this paper.
III. PROBLEM FORMULATION

In this section, we formulate the problem of IoT device AD with known CSI at the receiver, in the presence of sparse or non-sparse transmission. In order to detect the active devices, it is assumed that the BS is equipped with a matched filter and that the precoding matrix and the CSI, i.e., Φ_m, are available. Before AD, the observation matrix at the m-th receive antenna, Y_m, is passed through the decorrelator to obtain

    \bar{Y}_m = Φ_m^H Y_m ∈ C^{K × N_s}.   (3)

In the following, we investigate the details of the AD problem based on Gaussian detection and show how a threshold can be computed to distinguish active IoT devices from inactive ones.
The output of the decorrelator receiver for the m-th receive antenna is expressed as

    \bar{Y}_m = Φ_m^H Φ_m B + Φ_m^H W_m
              = [ \sum_{k=1}^{K} g_{m,1}^* g_{m,k} c_1^T c_k b_k^T + g_{m,1}^* c_1^T W_m ]
                [ \sum_{k=1}^{K} g_{m,2}^* g_{m,k} c_2^T c_k b_k^T + g_{m,2}^* c_2^T W_m ]
                [                                ⋮                                       ]
                [ \sum_{k=1}^{K} g_{m,K}^* g_{m,k} c_K^T c_k b_k^T + g_{m,K}^* c_K^T W_m ].   (4)

Consequently, the received signal from the k-th device at the m-th receive antenna is

    r_k^m = ||g_{m,k} c_k||_2^2 b_k^T + \sum_{i=1, i≠k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_i^T + g_{m,k}^* c_k^T W_m,   (5)

where the second and third terms are multi-user interference and additive noise, respectively. Since an IoT device is either active or inactive for the entire packet transmission, we determine the activity status of a device based on each received symbol and then, using the spectrum-sensing results in [20], combine the decisions obtained from all N_s symbols. Device AD in the case of single-symbol transmission is studied in [12]; we follow that approach to determine the status of each device from each received symbol and then combine the results. The j-th received symbol from device k at receive antenna m, denoted r_{k,j}^m, is

    r_{k,j}^m = ||g_{m,k} c_k||_2^2 b_{k,j} + \sum_{i=1, i≠k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_{i,j} + g_{m,k}^* c_k^T w_j,   (6)

where the first term is the desired signal, the second term is multi-user interference from the other devices, and the third term is additive noise.
For the sake of simplicity, we assume that BPSK modulation is used, i.e., the transmitted symbols are drawn from A = {−1, +1} with p(b_{k,j} = +1) = p(b_{k,j} = −1) = 1/2. The multi-user interference plus noise in r_{k,j}^m has variance

    σ_{k,j}^2 = var( \sum_{i=1, i≠k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_{i,j} + g_{m,k}^* c_k^T w_j )
              = \sum_{i=1, i≠k}^{K} |g_{m,k}^* g_{m,i} c_k^T c_i|^2 P_a + ||g_{m,k}^* c_k^T||_2^2.   (7)

We can now approximate r_{k,j}^m by a Gaussian distribution N(||g_{m,k} c_k||_2^2, σ_{k,j}^2) [20]. In order to identify the activity of device k, our goal is to propose an algorithm that defines a threshold τ and declares device k active if |r_{k,j}^m| > τ. The probability of error, P_e, is then computed as

    P_e^{k,j} = P_a p(|r_{k,j}^m| < τ | b_{k,j} ≠ 0) + 2(1 − P_a) p(|r_{k,j}^m| > τ | b_{k,j} = 0),   (8)

where p(r_{k,j}^m | b_{k,j} ≠ 0) ~ N(||g_{m,k} c_k||_2^2, σ_{k,j}^2) and p(r_{k,j}^m | b_{k,j} = 0) ~ N(0, σ_{k,j}^2). We can rewrite (8) as

    P_e^{k,j} = 2(1 − P_a) Q(τ / σ_{k,j}) + P_a Q((||g_{m,k} c_k||_2^2 − τ) / σ_{k,j}),   (9)

where Q(x) = (1/√(2π)) ∫_x^∞ exp(−t^2/2) dt denotes the Gaussian tail function. The probability of error in (9) is a convex function of τ; hence, a fine-tuned neural network is capable of solving this problem and detecting the active devices by finding the optimum τ.
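To make (9) concrete, the following sketch evaluates P_e^{k,j} over a grid of thresholds and picks the minimizer; the signal amplitude, noise standard deviation, and activity probability are illustrative placeholders, not values from the paper.

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_error(tau, amp, sigma, Pa):
    """Probability of error in Eq. (9) for threshold tau; amp stands in for ||g c||^2."""
    return 2 * (1 - Pa) * Q(tau / sigma) + Pa * Q((amp - tau) / sigma)

amp, sigma, Pa = 1.0, 0.2, 0.1                 # assumed values for illustration
taus = [i * amp / 1000 for i in range(1001)]
tau_opt = min(taus, key=lambda t: p_error(t, amp, sigma, Pa))
# Since (9) is convex in tau, the grid minimizer approximates the unique optimum.
print(round(tau_opt, 3))
```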
In the following section, we define our DL-based algorithm to find the optimum τ and minimize the probability of error.
IV. DL-BASED AD

Device AD is the first step toward effective MUD in grant-free uplink multiple access. Recent studies on AD suggest using CS methods to identify the set of active devices [14], [15]. However, these methods fail in practical scenarios, where the activity rate is time-varying and/or unknown. Moreover, they are mainly effective in low-activity-rate scenarios, i.e., when the sparsity level is high [14]. In this section, we propose our AD algorithm, called CNN-AD, which employs a CNN for heterogeneous IoT networks. With a suitably designed CNN, the underlying pattern in device activity can be learned.
A. CNN-AD Algorithm

Fig. 2 illustrates the structure of the proposed CNN-AD algorithm. As seen, it is composed of three blocks: 1) preprocessing, 2) CNN processing, and 3) hypothesis testing. In the preprocessing step, after sequence matched filtering, we first stack the observation matrices from all M receive antennas in a 3D tensor as

    R = [ P \bar{Y}_1 ]
        [ P \bar{Y}_2 ]
        [      ⋮      ]
        [ P \bar{Y}_M ],   (10)

where P \bar{Y}_m ∈ C^{K × N_s}, \bar{Y}_m = Φ_m^H Y_m for m = 1, 2, ..., M, and P ≜ diag(p_1, ..., p_K) with p_k ∈ {1/P_1, ..., 1/P_S} for k = 1, 2, ..., K.

In the CNN processing block, the 3D tensor R is used as the input of a suitably designed CNN. CNN models benefit from convolutional layers, which perform convolution operations between matrices instead of multiplications. This leads to dimension reduction for feature extraction and provides the subsequent network layers with an input that retains only the useful features of the original high-dimensional input. IoT device AD can be formulated as a binary classification or regression problem. Formulating device AD as a classification problem is straightforward, but it requires a careful design of the CNN's structure and parameters. In the hypothesis testing block, the K outputs of the CNN's sigmoid layer are compared with a predefined threshold to determine the activity status of the IoT devices in the network. If the k-th output of the sigmoid layer exceeds the threshold, the k-th IoT device is identified as active.
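The CNN processing and hypothesis-testing blocks can be sketched at toy scale as below. The random (untrained) weights, the much smaller layer sizes, and the omission of max-pooling are simplifications for illustration only, not the authors' implementation from Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(1)
K, Ns, M = 8, 4, 2            # devices, symbols, antennas (illustrative)
n_kernels, n_hidden = 4, 16   # far smaller than the 128-kernel / 1024-unit network in Fig. 2

def conv2d_same(x, kernels):
    """Naive 3x3 'same' convolution of a (H, W) input with each kernel."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros((kernels.shape[0], H, W))
    for n, k in enumerate(kernels):
        for i in range(H):
            for j in range(W):
                out[n, i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

def forward(R, W1, W2, W3):
    """Conv -> flatten -> FC + ReLU -> FC -> sigmoid activity probabilities."""
    feats = np.concatenate([conv2d_same(R[m], W1).reshape(-1) for m in range(M)])
    h = np.maximum(W2 @ feats, 0.0)
    z = W3 @ h                             # K logits, one per device
    return 1.0 / (1.0 + np.exp(-z))        # sigmoid layer

R = np.abs(rng.normal(size=(M, K, Ns)))    # stand-in for the preprocessed tensor of Eq. (10)
W1 = rng.normal(size=(n_kernels, 3, 3)) * 0.1
W2 = rng.normal(size=(n_hidden, M * n_kernels * K * Ns)) * 0.1
W3 = rng.normal(size=(K, n_hidden)) * 0.1

a_hat = forward(R, W1, W2, W3)
active = a_hat >= 0.5                      # hypothesis testing against threshold 0.5
print(active.shape)
```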
B. Training Phase

In order to train the designed CNN, we define the activity vector a as

    a = [a_1, a_2, ..., a_K]^T,   (11)

where a_k is 1 when the k-th IoT device is active and 0 otherwise. We train our model with N independent training samples (R^{(j)}, a^{(j)}), j = 1, 2, ..., N, where a^{(j)} and R^{(j)} are the activity vector and the observation tensor of the j-th training sample, respectively. Our objective is to train the designed CNN to generate the desired output vector a^{(j)} for the input tensor R^{(j)}.

[Fig. 2: Model structure of the proposed CNN-AD algorithm: the received messages at the M antennas are preprocessed into R; the CNN consists of a convolutional layer with 128 kernels of size 3×3 (stride 3, same padding), 2×2 max-pooling with stride 2, a fully connected layer of 1024 ReLU units, and a fully connected sigmoid layer with K outputs; hypothesis testing compares each output against the threshold 0.5.]

The model learns a non-linear transformation Ψ such that

    â^{(j)} = Ψ(R^{(j)}; Θ),   (12)

where Θ is the set of parameters learned during training by minimizing the loss function. The output of the model, i.e., â, determines the activity probabilities of the IoT devices. Since there are two classes (active or inactive) for each IoT device, the loss function is chosen as the binary cross-entropy. For each training sample j, the binary cross-entropy loss compares the probability that the IoT devices are active, â^{(j)}, with the true activity vector a^{(j)}:

    Loss(Θ) = (1/N) \sum_{j=1}^{N} −[ a^{(j)} log(â^{(j)}) + (1 − a^{(j)}) log(1 − â^{(j)}) ],   (13)

where log(·) is applied element-wise to â^{(j)}, and the vector multiplications are also element-wise.
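The loss in (13) can be sketched directly. The activity vectors and predicted probabilities below are made-up examples, and the result is additionally averaged over the K devices (a common convention; (13) itself leaves the element-wise terms summed).

```python
import math

def bce_loss(a_batch, a_hat_batch, eps=1e-12):
    """Mean binary cross-entropy over N samples, per Eq. (13), averaged over devices."""
    total = 0.0
    for a, a_hat in zip(a_batch, a_hat_batch):
        total += -sum(ak * math.log(ph + eps) + (1 - ak) * math.log(1 - ph + eps)
                      for ak, ph in zip(a, a_hat)) / len(a)
    return total / len(a_batch)

a_true = [[1, 0, 0, 1], [0, 0, 1, 0]]                    # activity vectors a^(j)
a_pred = [[0.9, 0.1, 0.2, 0.8], [0.1, 0.05, 0.7, 0.2]]   # sigmoid outputs of the CNN
print(round(bce_loss(a_true, a_pred), 4))
```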
V. EXPERIMENTS

In this section, we evaluate the performance of the proposed CNN-AD algorithm through various simulation experiments and compare it with some of the existing methods.

A. Simulation Setup

We consider an IoT network with K devices, where K > N_c, and pseudo-random codes are used as the spreading sequences of the IoT devices. The probability of activity P_a is considered unknown and time-varying from one packet to another in the range P_a ∈ [0, P_max], where P_max = 0.1. BPSK modulation is used for uplink transmission. Without loss of generality, the channel coefficients between the IoT devices and the BS are modeled as independent zero-mean complex Gaussian random variables with variance σ_{k,m}^2 = 1, k ∈ S_t and m ∈ {1, ..., M}. The additive white noise is modeled as zero-mean complex Gaussian random variables with variance σ_w^2, and the signal-to-noise ratio (SNR) in dB is defined as γ ≜ 10 log(σ_s^2 / σ_w^2), where σ_s^2 = P_a P_t is the average transmit power and P_t = \sum_{k=1}^{K} p_k is the total transmission power. Unless otherwise mentioned, we consider spreading sequences with spreading factor N_c = 32. In order to train CNN-AD, we generate 10^5 independent samples and use 80% for training and the rest for validation and test. The Adam optimizer [21] with a learning rate of 10^{−3} is used to minimize the cross-entropy loss function in (13).
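The SNR definition above can be checked numerically; the powers and noise variance below are illustrative, and base-10 log is assumed for the dB conversion.

```python
import math

def snr_db(Pa, p_list, sigma_w2):
    """gamma = 10*log10(sigma_s^2 / sigma_w^2) with sigma_s^2 = Pa * sum_k p_k."""
    Pt = sum(p_list)                 # total transmission power P_t
    return 10 * math.log10(Pa * Pt / sigma_w2)

# Pa = 0.1, K = 40 unit-power devices, sigma_w^2 = 0.4 -> sigma_s^2 / sigma_w^2 = 10
print(round(snr_db(0.1, [1.0] * 40, 0.4), 2))  # → 10.0
```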
[Fig. 3: Achieved BER with MMSE with a priori AD of OMP, AMP, and CNN-AD, without knowing the number of active devices; AER versus SNR curves are shown for each method under both uniform and non-uniform power allocation.]

[Fig. 4: Impact of P_a on the performance of the different methods as the a priori AD stage for MMSE, in terms of achieved BER, for activity rates from 0.1 to 0.8.]
B. Simulation Results

1) Performance Evaluation of CNN-AD: We assess CNN-AD through various simulations and compare it with existing CS-based methods, including orthogonal matching pursuit (OMP) [22] and approximate message passing (AMP) [23]. The impact of SNR on the activity error rate (AER) achieved by the different AD algorithms, in both homogeneous and heterogeneous IoT networks with uniform and non-uniform power allocation, is shown in Fig. 3. The AER of the different methods is compared over a wide range of SNRs in an IoT system with a total of K = 40 IoT devices and a single BS with M = 100 receive antennas. As expected, the AER of all AD algorithms decreases with increasing SNR. However, CNN-AD achieves the best performance since, unlike the non-Bayesian greedy algorithms OMP and AMP, our method relies on the statistical distributions of device activities and channels and exploits them in the training process.

TABLE I: Performance analysis of the different algorithms for two typical IoT devices, for P_max = 0.1 at γ = 10 dB.

    IoT Device | Model  | Precision | Recall | F1-score
    Device A   | OMP    |    28%    |  32%   |   30%
               | AMP    |    31%    |  35%   |   33%
               | CNN-AD |    73%    |  92%   |   81%
    Device B   | OMP    |    33%    |  32%   |   32%
               | AMP    |    38%    |  35%   |   36%
               | CNN-AD |   100%    |  83%   |   91%
Fig. 4 illustrates the effect of the activity rate on the bit error rate (BER) of minimum mean square error (MMSE)-MUD with the different AD algorithms at γ = 10 dB SNR. As seen, when the activity rate increases, the number of active devices increases accordingly, and it becomes more difficult to detect all the active devices. This results in a higher BER. We use P_max = 0.1 to train CNN-AD; thus, MMSE-MUD with CNN-AD shows performance degradation for activity rates larger than P_max = 0.1. However, it still outperforms MMSE-MUD with the OMP and AMP AD algorithms. It should be mentioned that this performance improves when CNN-AD is trained with a larger value of P_max.

We further investigate the AD algorithms in terms of other metrics for two typical IoT devices, for P_max = 0.1 at γ = 10 dB SNR, as presented in Table I. In this table, we compare the precision, recall, and F1-score, defined in [24], achieved by CNN-AD with those of the OMP and AMP AD algorithms. As seen, all metrics are improved by using CNN-AD.
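The per-device precision, recall, and F1-score reported in Table I can be computed as below; the ground-truth and predicted activity sequences are made-up examples, not data from the paper.

```python
def prf1(true, pred):
    """Precision, recall, and F1 for one device's activity decisions across packets."""
    tp = sum(1 for t, p in zip(true, pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(true, pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

true_act = [1, 0, 0, 1, 1, 0, 1, 0]   # ground-truth activity of one device per packet
pred_act = [1, 0, 1, 1, 0, 0, 1, 0]   # detector decisions
print(prf1(true_act, pred_act))       # → (0.75, 0.75, 0.75)
```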
VI. CONCLUSIONS

In this paper, we consider the problem of AD in IoT networks with grant-free NOMA. Depending on the application, IoT devices can be inactive for long periods of time and become active only when transmitting to the BS. Hence, identifying the active devices is required for accurate data detection. Several studies propose CS-based methods for AD; however, a high level of message sparsity is necessary for those methods. In order to remove this requirement and exploit the statistical properties of the channels, we propose a CNN-based method, called CNN-AD, to detect active IoT devices. Comparison with available methods shows the strength of our algorithm.

ACKNOWLEDGMENT

The study presented in this paper is supported by Alberta Innovates and the Natural Sciences and Engineering Research Council of Canada (NSERC).
REFERENCES

[1] G. Durisi, T. Koch, and P. Popovski, "Toward massive, ultrareliable, and low-latency wireless communication with short packets," Proceedings of the IEEE, vol. 104, no. 9, pp. 1711-1726, 2016.
[2] L. D. Xu, W. He, and S. Li, "Internet of things in industries: A survey," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2233-2243, 2014.
[3] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, "Internet of Things: A survey on enabling technologies, protocols, and applications," IEEE Communications Surveys Tutorials, vol. 17, no. 4, pp. 2347-2376, 2015.
|
| 227 |
+
page_content=' [4] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 228 |
+
page_content=' Bockelmann, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 229 |
+
page_content=' Pratas, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 230 |
+
page_content=' Nikopour, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 231 |
+
page_content=' Au, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 232 |
+
page_content=' Svensson, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 233 |
+
page_content=' Ste- fanovic, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 234 |
+
page_content=' Popovski, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 235 |
+
page_content=' Dekorsy, “Massive machine-type communi- cations in 5G: Physical and MAC-layer solutions,” IEEE Communications Magazine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 236 |
+
page_content=' 54, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 237 |
+
page_content=' 9, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 238 |
+
page_content=' 59–65, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 239 |
+
page_content=' [5] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 240 |
+
page_content=' Ejaz and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 241 |
+
page_content=' Ibnkahla, “Multiband spectrum sensing and resource allocation for IoT in cognitive 5G networks,” IEEE Internet of Things Journal, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 242 |
+
page_content=' 5, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 243 |
+
page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 244 |
+
page_content=' 150–163, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 245 |
+
page_content=' [6] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 246 |
+
page_content=' Ding, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 247 |
+
page_content=' Fan, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 248 |
+
page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 249 |
+
page_content=' Poor, “Impact of user pairing on 5G nonorthog- onal multiple-access downlink transmissions,” IEEE Transactions on Vehicular Technology, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 250 |
+
page_content=' 65, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 251 |
+
page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 252 |
+
page_content=' 6010–6023, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 253 |
+
page_content=' [7] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 254 |
+
page_content=' Saito, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 255 |
+
page_content=' Kishiyama, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 256 |
+
page_content=' Benjebbour, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 257 |
+
page_content=' Nakamura, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 258 |
+
page_content=' Li, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 259 |
+
page_content=' Higuchi, “Non-orthogonal multiple access (NOMA) for cellular future radio access,” in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), 2013, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 260 |
+
page_content=' 1–5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 261 |
+
page_content=' [8] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 262 |
+
page_content=' Au, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 263 |
+
page_content=' Zhang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 264 |
+
page_content=' Nikopour, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 265 |
+
page_content=' Yi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 266 |
+
page_content=' Bayesteh, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 267 |
+
page_content=' Vilaipornsawai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 268 |
+
page_content=' Ma, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 269 |
+
page_content=' Zhu, “Uplink contention based SCMA for 5G radio access,” in 2014 IEEE Globecom Workshops (GC Wkshps), 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 270 |
+
page_content=' 900–905.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 271 |
+
page_content=' [9] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 272 |
+
page_content=' Liu, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 273 |
+
page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 274 |
+
page_content=' Larsson, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 275 |
+
page_content=' Yu, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 276 |
+
page_content=' Popovski, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 277 |
+
page_content=' Stefanovic, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 278 |
+
page_content=' de Carvalho, “Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things,” IEEE Signal Processing Magazine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 279 |
+
page_content=' 35, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 280 |
+
page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 281 |
+
page_content=' 88–99, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 282 |
+
page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 283 |
+
page_content=' [10] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 284 |
+
page_content=' Verdu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 285 |
+
page_content=', Multiuser detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 286 |
+
page_content=' Cambridge university press, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 287 |
+
page_content=' [11] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 288 |
+
page_content=' Zhang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 289 |
+
page_content=' Guo, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 290 |
+
page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 291 |
+
page_content=' Xi, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 292 |
+
page_content=' Wu, “Block sparse bayesian learning based joint user activity detection and channel estimation for grant-free noma systems,” IEEE Transactions on Vehicular Technology, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 293 |
+
page_content=' 67, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 294 |
+
page_content=' 10, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 295 |
+
page_content=' 9631–9640, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 296 |
+
page_content=' [12] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 297 |
+
page_content=' Zhu and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 298 |
+
page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 299 |
+
page_content=' Giannakis, “Exploiting sparse user activity in multiuser detection,” IEEE Transactions on Communications, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 300 |
+
page_content=' 59, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 301 |
+
page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 302 |
+
page_content=' 454–465, Feb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 303 |
+
page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 304 |
+
page_content=' [13] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 305 |
+
page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 306 |
+
page_content=' Schepker, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 307 |
+
page_content=' Bockelmann, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 308 |
+
page_content=' Dekorsy, “Coping with CDMA asynchronicity in compressive sensing multi-user detection,” in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), Jun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 309 |
+
page_content=' 2013, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 310 |
+
page_content=' 1–5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 311 |
+
page_content=' [14] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 312 |
+
page_content=' Chen, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 313 |
+
page_content=' Sohrabi, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 314 |
+
page_content=' Yu, “Sparse activity detection for massive connectivity,” IEEE Transactions on Signal Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 315 |
+
page_content=' 66, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 316 |
+
page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 317 |
+
page_content=' 1890–1904, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 318 |
+
page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 319 |
+
page_content=' [15] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 320 |
+
page_content=' Takeuchi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 321 |
+
page_content=' Tanaka, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 322 |
+
page_content=' Kawabata, “Performance improvement of iterative multiuser detection for large sparsely spread CDMA systems by spatial coupling,” IEEE Transactions on Information Theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 323 |
+
page_content=' 61, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 324 |
+
page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 325 |
+
page_content=' 1768–1794, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 326 |
+
page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 327 |
+
page_content=' [16] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 328 |
+
page_content=' Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 329 |
+
page_content=' Zhu, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 330 |
+
page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 331 |
+
page_content=' Lim, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 332 |
+
page_content=' Wei, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 333 |
+
page_content=' Liu, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 334 |
+
page_content=' Jiang, “Compressive sensing based user activity detection and channel estimation in uplink noma systems,” in 2020 IEEE Wireless Communications and Networking Conference (WCNC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 335 |
+
page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 336 |
+
page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 337 |
+
page_content=' [17] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 338 |
+
page_content=' Goodfellow, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 339 |
+
page_content=' Bengio, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 340 |
+
page_content=' Courville, Deep learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 341 |
+
page_content=' MIT press Cambridge, 2016, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 342 |
+
page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 343 |
+
page_content=' [18] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 344 |
+
page_content=' Mohammadkarimi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 345 |
+
page_content=' Mehrabi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 346 |
+
page_content=' Ardakani, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 347 |
+
page_content=' Jing, “Deep learning based sphere decoding,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 348 |
+
page_content=' Wireless Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 349 |
+
page_content=', pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 350 |
+
page_content=' 1–1, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 351 |
+
page_content=' [19] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 352 |
+
page_content=' Miao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 353 |
+
page_content=' Guo, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 354 |
+
page_content=' Li, “Grant-free NOMA with device activity learning using long short-term memory,” IEEE Wireless Communications Letters, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 355 |
+
page_content=' 1–1, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 356 |
+
page_content=' [20] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 357 |
+
page_content=' Zhang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 358 |
+
page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 359 |
+
page_content=' Mallik, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 360 |
+
page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 361 |
+
page_content=' Letaief, “Cooperative spectrum sensing optimization in cognitive radio networks,” in 2008 IEEE International Conference on Communications, 2008, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 362 |
+
page_content=' 3411–3415.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 363 |
+
page_content=' [21] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 364 |
+
page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 365 |
+
page_content=' Kingma and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 366 |
+
page_content=' Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 367 |
+
page_content='6980, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 368 |
+
page_content=' [22] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 369 |
+
page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 370 |
+
page_content=' Cai and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 371 |
+
page_content=' Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Transactions on Information theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 372 |
+
page_content=' 57, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 373 |
+
page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 374 |
+
page_content=' 4680–4688, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 375 |
+
page_content=' [23] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 376 |
+
page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 377 |
+
page_content=' Donoho, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 378 |
+
page_content=' Maleki, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 379 |
+
page_content=' Montanari, “Message-passing algo- rithms for compressed sensing,” Proceedings of the National Academy of Sciences, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 380 |
+
page_content=' 106, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 381 |
+
page_content=' 45, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 382 |
+
page_content=' 18 914–18 919, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 383 |
+
page_content=' [24] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 384 |
+
page_content=' Goutte and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 385 |
+
page_content=' Gaussier, “A probabilistic interpretation of precision, re- call and f-score, with implication for evaluation,” in European conference on information retrieval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 386 |
+
page_content=' Springer, 2005, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
| 387 |
+
page_content=' 345–359.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
|
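The reference text above was stored as a sequence of LangChain-style Document rows, each serialized as `page_content='…' metadata={'source': '…'}` with the chunk's source PDF path repeated per row. A minimal sketch of rebuilding the running text from such rows, assuming single-quoted fields with no embedded quotes (the short paths in the sample rows are illustrative):

```python
import re

# One regex per serialized row: capture the text fragment and its source path.
ROW_RE = re.compile(r"page_content='(.*?)' metadata=\{'source': '(.*?)'\}")

def rebuild_text(rows):
    """Join page_content fragments in order; collect the distinct source paths."""
    fragments, sources = [], set()
    for row in rows:
        m = ROW_RE.search(row)
        if m:
            fragments.append(m.group(1))
            sources.add(m.group(2))
    return "".join(fragments).strip(), sources

rows = [
    "page_content=' [21] D.' metadata={'source': '/kb/2301.01274v1.pdf'}",
    "page_content=' P.' metadata={'source': '/kb/2301.01274v1.pdf'}",
    "page_content=' Kingma and J.' metadata={'source': '/kb/2301.01274v1.pdf'}",
]
text, sources = rebuild_text(rows)
print(text)     # [21] D. P. Kingma and J.
print(sources)  # {'/kb/2301.01274v1.pdf'}
```

Because the splitter cut mid-sentence (often at periods after initials), simple concatenation of the fragments recovers the original reference strings.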
JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:615d060741c38e062c0ac71fb4ed52ad7f261a247a90b42f7044d142d49de416
+size 5642271
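Each ADDED block like the one above is a Git LFS pointer file (spec v1): three `key value` lines recording the spec version, the sha256 object id, and the size in bytes of the real blob stored outside the repository. A minimal parsing sketch, assuming a well-formed pointer (the helper name is illustrative, not part of any library):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:615d060741c38e062c0ac71fb4ed52ad7f261a247a90b42f7044d142d49de416
size 5642271
"""
info = parse_lfs_pointer(pointer)
print(info["size"])                   # 5642271
print(info["oid"].split(":", 1)[1][:8])  # 615d0607
```

The `oid` lets a client fetch the blob from the LFS store, and `size` lets it verify the download without hashing first.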
JdE2T4oBgHgl3EQfUgew/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15cc58fa3787a33564790da1b006ea35075e5166608ec5539f64846362378e9e
+size 655405
KdFOT4oBgHgl3EQfzDRI/content/2301.12930v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d7423488e44e8f2586f6c1205cc378cd6d115fc280a8580b628733d4d97d429
+size 1666446
MNE4T4oBgHgl3EQfiw29/content/tmp_files/2301.05137v1.pdf.txt ADDED
@@ -0,0 +1,2136 @@
Springer Nature 2021 LATEX template

Density functions of periodic sequences of continuous events

Olga Anosova1 and Vitaliy Kurlin1*

1*Computer Science, University of Liverpool, Ashton street, Liverpool, L69 3BX, UK.
*Corresponding author(s). E-mail(s): vitaliy.kurlin@gmail.com;
Contributing authors: oanosova@liverpool.ac.uk;

Abstract

Periodic Geometry studies isometry invariants of periodic point sets that are also continuous under perturbations. The motivations come from periodic crystals whose structures are determined in a rigid form but any minimal cells can discontinuously change due to small noise in measurements. For any integer k ≥ 0, the density function of a periodic set S was previously defined as the fractional volume of all k-fold intersections (within a minimal cell) of balls that have a variable radius t and centers at all points of S. This paper introduces the density functions for periodic sets of points with different initial radii motivated by atomic radii of chemical elements and by continuous events occupying disjoint intervals in time series. The contributions are explicit descriptions of the densities for periodic sequences of intervals. The new densities are strictly stronger and distinguish periodic sequences that have identical densities in the case of zero radii.

Keywords: computational geometry, periodic set, periodic time series, isometry invariant, density function

MSC Classification: 68U05, 51K05, 51N20, 51F30, 51F20
1 Motivations for the density functions of periodic sets

This work substantially extends the previous conference paper [3] in Discrete Geometry and Mathematical Morphology 2022. The past work explicitly described the density functions for periodic sequences of zero-sized points. The new work extends these analytic descriptions to periodic sequences whose points have non-negative radii.

The proposed extension to the weighted case is motivated by crystallography and materials chemistry [1] because all chemical elements have different atomic radii. In dimension 1, the key motivation is the study of periodic time series consisting of continuous and sequential (non-overlapping) events represented by disjoint intervals. Any such interval [a, b] ⊂ R for a ≤ b is the one-dimensional ball with the center (a + b)/2 and radius (b − a)/2.
The point-set representation of periodic crystals is the most fundamental mathematical model for crystalline materials because nuclei of atoms are well-defined physical objects, while chemical bonds are not real sticks or strings but abstractly represent inter-atomic interactions depending on many thresholds for distances and angles.

Since crystal structures are determined in a rigid form, their most practical equivalence is rigid motion (a composition of translations and rotations) or isometry that maintains all inter-point distances and includes also mirror reflections [20].

Now we introduce the key concepts. Let Rn be Euclidean space, Z be the set of all integers.
arXiv:2301.05137v1 [cs.CG] 12 Jan 2023
0
|
| 63 |
+
0.2
|
| 64 |
+
0.4
|
| 65 |
+
0.6
|
| 66 |
+
0.8
|
| 67 |
+
1
|
| 68 |
+
0
|
| 69 |
+
0.2
|
| 70 |
+
0.4
|
| 71 |
+
0.6
|
| 72 |
+
0.8
|
| 73 |
+
1
|
| 74 |
+
1.2
|
| 75 |
+
1.4
|
| 76 |
+
1.6
|
| 77 |
+
ψkA(t)
|
| 78 |
+
Radius of Balls
|
| 79 |
+
ψ0A
|
| 80 |
+
ψ1A
|
| 81 |
+
ψ2A
|
| 82 |
+
ψ3A
|
| 83 |
+
ψ4A
|
| 84 |
+
ψ5A
|
| 85 |
+
ψ6A
|
| 86 |
+
ψ7A
|
| 87 |
+
ψ8A
|
| 88 |
+
0
|
| 89 |
+
0.2
|
| 90 |
+
0.4
|
| 91 |
+
0.6
|
| 92 |
+
0.8
|
| 93 |
+
1
|
| 94 |
+
0
|
| 95 |
+
0.2
|
| 96 |
+
0.4
|
| 97 |
+
0.6
|
| 98 |
+
0.8
|
| 99 |
+
1
|
| 100 |
+
1.2
|
| 101 |
+
1.4
|
| 102 |
+
1.6
|
| 103 |
+
Σ1nψkA(t)
|
| 104 |
+
Radius of Balls
|
| 105 |
+
n = 1
|
| 106 |
+
n = 2
|
| 107 |
+
n = 3
|
| 108 |
+
n = 4
|
| 109 |
+
n = 5
|
| 110 |
+
n = 6
|
| 111 |
+
n = 7
|
| 112 |
+
n = 8
|
Fig. 1 Illustration of Definition 1.2 for the hexagonal lattice. Left: subregions Uk(t) are covered by k disks for the radii t = 0.25, 0.55, 0.75, 1. Right: the densities ψk are above the corresponding densigram of the accumulated functions ψ1(t) + · · · + ψk(t).
|
| 119 |
+
Definition 1.1 (a lattice Λ, a unit cell U, a
|
| 120 |
+
motif M, a periodic point set S = M + Λ).
|
| 121 |
+
For any linear basis v1, . . . , vn of Rn, a lattice
|
| 122 |
+
is Λ
|
| 123 |
+
=
|
| 124 |
+
{
|
| 125 |
+
n�
|
| 126 |
+
i=1
|
| 127 |
+
civi
|
| 128 |
+
:
|
| 129 |
+
ci
|
| 130 |
+
∈
|
| 131 |
+
Z}. The unit cell
|
| 132 |
+
U(v1, . . . , vn) =
|
| 133 |
+
� n�
|
| 134 |
+
i=1
|
| 135 |
+
civi : ci ∈ [0, 1)
|
| 136 |
+
�
|
| 137 |
+
is the par-
|
| 138 |
+
allelepiped spanned by the basis above. A motif
|
| 139 |
+
M ⊂ U is any finite set of points p1, . . . , pm ∈ U.
|
| 140 |
+
A periodic point set [20] is the Minkowski sum
|
| 141 |
+
S = M + Λ = {u + v | u ∈ M, v ∈ Λ}.
|
| 142 |
+
■
|
| 143 |
+
In dimension n = 1, a lattice is defined by any
|
| 144 |
+
non-zero vector v ∈ R, any periodic point set S
|
| 145 |
+
is a periodic sequence {p1, . . . , pm} + vZ with the
|
| 146 |
+
period v equal to the length of the vector v.
|
| 147 |
+
Definition 1.2 (density functions for periodic
|
| 148 |
+
sets of points with radii). Let a periodic set S =
|
| 149 |
+
Λ + M ⊂ Rn have a unit cell U. For every point
|
| 150 |
+
p ∈ M, fix a radius r(p) ≥ 0. For any integer
|
| 151 |
+
k ≥ 0, let Uk(t) be the region within the cell U
|
| 152 |
+
covered by exactly k closed balls ¯B(p; r(p) + t)
|
| 153 |
+
for t ≥ 0 and all points p ∈ M and their transla-
|
| 154 |
+
tions by Λ. The k-th density function ψk[S](t) =
|
| 155 |
+
Vol[Uk(t)]/Vol[U] is the fractional volume of the
|
| 156 |
+
k-fold intersections of these balls within U.
|
| 157 |
+
■
|
| 158 |
+
The density ψk[S](t) can be interpreted as the
|
| 159 |
+
probability that a random (uniformly chosen in U)
|
| 160 |
+
point q is at a maximum distance t to exactly k
|
| 161 |
+
balls with initial radii r(p) and all centers p ∈ S.
|
| 162 |
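In dimension n = 1, this probabilistic interpretation can be checked numerically. The sketch below is our own illustration, not code from the paper: it estimates ψk[S](t) by Monte Carlo sampling of the unit cell [0, 1), assuming every grown radius r(p) + t stays at most 1/2 so that each translate class of an interval covers a sample point at most once.

```python
import random

def density_fn_1d(centers, radii, k, t, samples=100_000, seed=0):
    """Monte Carlo estimate of the k-th density psi_k[S](t) for the
    periodic sequence S = {centers} + Z with the given initial radii.
    Assumes r + t <= 1/2 for every radius r, so that each translate
    class of an interval covers a sample point at most once."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        q = rng.random()                 # uniform point in the unit cell [0, 1)
        covered = 0
        for p, r in zip(centers, radii):
            d = abs(q - p)
            d = min(d, 1.0 - d)          # periodic (wrap-around) distance
            if d <= r + t:
                covered += 1
        if covered == k:
            hits += 1
    return hits / samples

# For the sequence S = {0, 1/3, 1/2} + Z with radii 1/12, 0, 1/12
# (used later in Example 3.1), psi_0(0) should be close to 2/3.
print(density_fn_1d([0, 1/3, 1/2], [1/12, 0, 1/12], k=0, t=0.0))
```

The estimate converges to the exact fractional length as the number of samples grows.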
+
For k = 0, the 0-th density ψ0[S](t) mea-
|
| 163 |
+
sures the fractional volume of the empty space not
|
| 164 |
+
covered by any expanding balls ¯B(p; r(p) + t)
|
| 165 |
+
In the simplest case of radii r(p) = 0, the infi-
|
| 166 |
+
nite sequence Ψ[S] = {ψk(t)}+∞
|
| 167 |
+
k=0 was called in
|
| 168 |
+
[8, section 3] the density fingerprint of a periodic
|
| 169 |
+
point set S. For k = 1 and small t > 0 while
|
| 170 |
+
all equal-sized balls ¯B(p; t) remain disjoint, the
|
| 171 |
+
1st density ψ1[S](t) increases proportionally to tn
|
| 172 |
+
but later reaches a maximum and eventually drops
|
| 173 |
+
back to 0 when all points of Rn are covered of by at
|
| 174 |
+
least two balls. See the densities ψk, k = 0, . . . , 8
|
| 175 |
+
for the square and hexagonal lattices in [8, Fig. 2].
|
| 176 |
+
The original densities helped find a missing
|
| 177 |
+
crystal in the Cambridge Structural Database,
|
| 178 |
+
which was accidentally confused with a slight per-
|
| 179 |
+
turbation (measured at a different temperature)
|
| 180 |
+
of another crystal (polymorph) with the same
|
| 181 |
+
chemical composition, see [8, section 7].
|
| 182 |
+
The new weighted case with radii r(p) ≥ 0 in
|
| 183 |
+
Definition 1.2 is even more practically important
|
| 184 |
+
due to different Van der Waals radii, which are
|
| 185 |
+
individually defined for all chemical elements.
|
| 186 |
+
The key advantage of density functions over
|
| 187 |
+
other isometry invariants of periodic crystals
|
| 188 |
+
|
| 189 |
+
Springer Nature 2021 LATEX template
|
| 190 |
+
Density functions of periodic sequences
|
| 191 |
+
3
|
| 192 |
+
(such as symmetries or conventional representa-
|
| 193 |
+
tions based on a geometry of a minimal cell) is
|
| 194 |
+
their continuity under perturbations, see details in
|
| 195 |
+
section 2 reviewing the related past work.
|
| 196 |
+
The only limitation is the infinite size of den-
|
| 197 |
+
sities ψk(t) due to the unbounded parameters:
|
| 198 |
+
integer index k ≥ 0 and continuous radius t ≥ 0.
|
| 199 |
+
We state the following problem in full general-
|
| 200 |
+
ity to motivate future work on these densities.
|
| 201 |
+
Problem 1.3 (computation of ψk). Verify if the
|
| 202 |
+
density functions ψk[S](t) from Definition 1.2 can
|
| 203 |
+
be computed in a polynomial time (in the size m
|
| 204 |
+
of a motif of S) for a fixed dimension n.
|
| 205 |
+
■
|
| 206 |
+
The main contribution is the full solution of
|
| 207 |
+
Problem 1.3 for n = 1. Theorems 3.2, 4.2, 5.2, 6.2,
|
| 208 |
+
and Corollary 6.4 efficiently compute all ψk[S](t)
|
| 209 |
+
depending on infinitely many values of k and t.
|
| 210 |
+
2 Review of related past work
|
| 211 |
+
Periodic Geometry was initiated in 2020 by the
|
| 212 |
+
problem [14, section 2.3] to design a computable
|
| 213 |
+
metric on isometry classes of lattices, which is
|
| 214 |
+
continuous under perturbations of a lattice basis.
|
| 215 |
+
Though a Voronoi domain is combinatorially
|
| 216 |
+
unstable under perturbations, its geometric shape
|
| 217 |
+
was used to introduce two continuous metrics [14,
|
| 218 |
+
Theorems 2, 4] requiring approximations due to a
|
| 219 |
+
minimization over infinitely many rotations.
|
| 220 |
+
Similar minimizations over rotations or other
|
| 221 |
+
continuous parameters are required for the com-
|
| 222 |
+
plete invariant isosets [2, 4] and density functions,
|
| 223 |
+
which can be practically computed in low dimen-
|
| 224 |
+
sions [16], whose completeness was proved for
|
| 225 |
+
generic periodic point sets in R3 [8, Theorem 2].
|
| 226 |
+
The density fingerprint Ψ[S] turned out to be
|
| 227 |
+
incomplete [8, section 5] in the example below.
|
| 228 |
+
Example 2.1 (periodic sequences S15, Q15 ⊂ R).
|
| 229 |
+
Widdowson et al. [20, Appendix B] discussed
|
| 230 |
+
homometric sets that can be distinguished by
|
| 231 |
+
the invariant AMD (Average Minimum Distances)
|
| 232 |
+
and not by diffraction patterns. The sequences
|
| 233 |
+
S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z,
|
| 234 |
+
Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z
|
| 235 |
+
have the unit cell [0, 15] shown as a circle in Fig. 2.
|
| 236 |
+
Fig. 2 Circular versions of the periodic sets S15, Q15.
|
| 237 |
+
These periodic sequences [9] are obtained as
|
| 238 |
+
Minkowski sums S15 = U + V + 15Z and Q15 =
|
| 239 |
+
U − V + 15Z for U = {0, 4, 9}, V = {0, 1, 3}.
|
| 240 |
+
■
|
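These Minkowski sums are easy to verify computationally. The helper below uses our own naming (not the paper's); it reduces each sum modulo the period 15 and recovers exactly the two motifs listed above.

```python
def minkowski_mod(U, V, period=15):
    """Minkowski sum U + V of two finite sets, reduced modulo the period."""
    return sorted({(u + v) % period for u in U for v in V})

U, V = [0, 4, 9], [0, 1, 3]
S15 = minkowski_mod(U, V)                # motif of S15 = U + V + 15Z
Q15 = minkowski_mod(U, [-v for v in V])  # motif of Q15 = U - V + 15Z
print(S15)  # [0, 1, 3, 4, 5, 7, 9, 10, 12]
print(Q15)  # [0, 1, 3, 4, 6, 8, 9, 12, 14]
```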
For rational-valued periodic sequences, [9, Theorem 4] proved that r-th order invariants (combinations of r-factor products) up to r = 6 are enough to distinguish such sequences up to a shift (a rigid motion of R without reflections).

The AMD invariant was extended to the Pointwise Distance Distribution (PDD), whose generic completeness [19, Theorem 4.4] was proved in any dimension n ≥ 1. However there are finite sets in R3 [15, Fig. S4] with the same PDD, which were distinguished by more sophisticated distance-based invariants in [18, appendix C].

The subarea of Lattice Geometry developed continuous parameterizations for the moduli spaces of lattices considered up to isometry in dimension two [7, 13] and three [6, 10].

For 1-periodic sequences of points in Rn, complete isometry invariants with continuous and computable metrics appeared in [12], see related results for finite clouds of unlabeled points [11, 17].
3 The 0-th density function ψ0
|
| 267 |
+
This section proves Theorem 3.2 explicitly describ-
|
| 268 |
+
ing the 0-th density function ψ0[S](t) for any
|
| 269 |
+
periodic sequence S ⊂ R of disjoint intervals.
|
| 270 |
+
For convenience, scale any periodic sequence
|
| 271 |
+
S to period 1 so that S is given by points
|
| 272 |
+
0 ≤ p1 < · · · < pm < 1 with radii r1, . . . , rm,
|
| 273 |
+
respectively. Since the expanding balls in R are
|
| 274 |
+
growing intervals, volumes of their intersections
|
| 275 |
+
linearly change with respect to the variable radius
|
| 276 |
+
t. Hence any density function ψk(t) is piecewise
|
| 277 |
+
linear and uniquely determined by corner points
|
| 278 |
+
(aj, bj) where the gradient of ψk(t) changes.
|
| 279 |
+
|
| 280 |
+
Springer Nature 2021 LATEX template
|
| 281 |
+
4
|
| 282 |
+
Density functions of periodic sequences
|
| 283 |
+
To prepare the proof of Theorem 3.2, we first
|
| 284 |
+
consider Example 3.1 for the simple sequence S.
|
| 285 |
+
Example 3.1 (0-th density function ψ0). Let the
|
| 286 |
+
periodic sequence S =
|
| 287 |
+
�
|
| 288 |
+
0, 1
|
| 289 |
+
3, 1
|
| 290 |
+
2
|
| 291 |
+
�
|
| 292 |
+
+ Z have three
|
| 293 |
+
points p1 = 0, p2 = 1
|
| 294 |
+
3, p3 = 1
|
| 295 |
+
2 of radii r1 = 1
|
| 296 |
+
12,
|
| 297 |
+
r2 = 0, r3 = 1
|
| 298 |
+
12, respectively. Fig. 3 shows each
|
| 299 |
+
point pi and its growing interval
|
| 300 |
+
Li(t) = [(pi−ri)−t, (pi+ri)+t] of the length 2ri+2t
|
| 301 |
+
for i = 1, 2, 3 in its own color: red, green, blue.
|
| 302 |
+
By
|
| 303 |
+
Definition
|
| 304 |
+
1.2
|
| 305 |
+
each
|
| 306 |
+
density
|
| 307 |
+
function
|
| 308 |
+
ψk[S](t) measures a fractional length covered by
|
| 309 |
+
exactly k intervals within the unit cell [0, 1]. We
|
| 310 |
+
periodicaly map the endpoints of each growing
|
| 311 |
+
interval to the unit cell [0, 1]. For instance, the
|
| 312 |
+
interval [− 1
|
| 313 |
+
12 − t, 1
|
| 314 |
+
12 + t] of the point p1 = 0 ≡ 1
|
| 315 |
+
(mod 1) maps to the red intervals [0, 1
|
| 316 |
+
12 +t]∪[11
|
| 317 |
+
12 −
|
| 318 |
+
t, 1] shown by solid red lines in Fig. 3. The same
|
| 319 |
+
image shows the green interval [1
|
| 320 |
+
3 − t, 1
|
| 321 |
+
3 + t] by
|
| 322 |
+
dashed lines and the blue interval [ 5
|
| 323 |
+
12 − t, 7
|
| 324 |
+
12 + t]
|
| 325 |
+
by dotted lines.
|
| 326 |
+
At the moment t = 0, since the starting inter-
|
| 327 |
+
vals are disjoint, they cover the length l = 2( 1
|
| 328 |
+
12 +
|
| 329 |
+
0 + 1
|
| 330 |
+
12) = 1
|
| 331 |
+
3. The non-covered part of [0, 1] has
|
| 332 |
+
length 1 − 1
|
| 333 |
+
3 = 2
|
| 334 |
+
3. So the graph of ψ0(t) at t = 0
|
| 335 |
+
starts from the point (0, 2
|
| 336 |
+
3), see Fig. 4.
|
| 337 |
+
At the first critical moment t = 1
|
| 338 |
+
24 when the
|
| 339 |
+
green and blue intervals collide at p = 3
|
| 340 |
+
8, only
|
| 341 |
+
the intervals [1
|
| 342 |
+
8, 7
|
| 343 |
+
24] ∪ [5
|
| 344 |
+
8, 7
|
| 345 |
+
8] of total length
|
| 346 |
+
5
|
| 347 |
+
12
|
| 348 |
+
remain uncovered. Hence ψ0(t) linearly drops to
|
| 349 |
+
the point ( 1
|
| 350 |
+
12, 5
|
| 351 |
+
12). At the next critical moment
|
| 352 |
+
t = 1
|
| 353 |
+
8 when the red and green intervals collide at
|
| 354 |
+
p =
|
| 355 |
+
5
|
| 356 |
+
24, only the interval [17
|
| 357 |
+
24, 19
|
| 358 |
+
24] of length
|
| 359 |
+
1
|
| 360 |
+
12
|
| 361 |
+
remain uncovered, so ψ0(t) continues to (1
|
| 362 |
+
8, 1
|
| 363 |
+
12).
|
| 364 |
+
The graph of ψ0(t) finally returns to the t-axis
|
| 365 |
+
at the point (1
|
| 366 |
+
6, 0) and remains there for t ≥ 1
|
| 367 |
+
6.
|
| 368 |
+
The piecewise linear behavior of ψ0(t) can be
|
| 369 |
+
described by specifying the corner points in Fig. 4:
|
| 370 |
+
�
|
| 371 |
+
0, 2
|
| 372 |
+
3
|
| 373 |
+
�
|
| 374 |
+
,
|
| 375 |
+
� 1
|
| 376 |
+
24, 5
|
| 377 |
+
12
|
| 378 |
+
�
|
| 379 |
+
,
|
| 380 |
+
�1
|
| 381 |
+
8, 1
|
| 382 |
+
12
|
| 383 |
+
�
|
| 384 |
+
,
|
| 385 |
+
�1
|
| 386 |
+
6, 0
|
| 387 |
+
�
|
| 388 |
+
.
|
| 389 |
+
■
|
Theorem 3.2 extends Example 3.1 to any periodic sequence S and implies that the 0-th density function ψ0(t) is uniquely determined by the ordered gap lengths between successive intervals.
ordered gap lengths between successive intervals.
|
| 394 |
+
Theorem 3.2 (description of ψ0). Let a periodic
|
| 395 |
+
sequence S = {p1, . . . , pm} + Z consist of disjoint
|
| 396 |
+
intervals with centers 0 ≤ p1 < · · · < pm < 1 and
|
| 397 |
+
radii r1, . . . , rm ≥ 0. Consider the total length l =
|
| 398 |
+
2
|
| 399 |
+
m
|
| 400 |
+
�
|
| 401 |
+
i=1
|
| 402 |
+
ri and gaps between successive intervals gi =
|
| 403 |
+
(pi − ri) − (pi−1 + ri−1), where i = 1, . . . , m and
|
| 404 |
+
p0 = pm − 1, r0 = rm. Put the gaps in increasing
|
| 405 |
+
order: g[1] ≤ g[2] ≤ · · · ≤ g[m].
|
| 406 |
+
Then
|
| 407 |
+
the
|
| 408 |
+
0-th
|
| 409 |
+
density
|
| 410 |
+
ψ0[S](t)
|
| 411 |
+
is
|
| 412 |
+
piecewise
|
| 413 |
+
linear
|
| 414 |
+
with
|
| 415 |
+
the
|
| 416 |
+
following
|
| 417 |
+
(unordered)
|
| 418 |
+
corner
|
| 419 |
+
points:
|
| 420 |
+
(0, 1 − l)
|
| 421 |
+
and
|
| 422 |
+
�
|
| 423 |
+
g[i]
|
| 424 |
+
2 , 1 − l −
|
| 425 |
+
i−1
|
| 426 |
+
�
|
| 427 |
+
j=1
|
| 428 |
+
g[j] − (m − i + 1)g[i]
|
| 429 |
+
�
|
| 430 |
+
for
|
| 431 |
+
i = 1, . . . , m, so the last corner is
|
| 432 |
+
�g[m]
|
| 433 |
+
2 , 0
|
| 434 |
+
�
|
| 435 |
+
.
|
| 436 |
+
If any corners are repeated, e.g. when g[i−1] =
|
| 437 |
+
g[i], these corners are collapsed into one corner. ■
|
| 438 |
+
Proof By Definition 1.2 the 0-th density function
|
| 439 |
+
ψ0(t) measures the total length of subintervals in the
|
| 440 |
+
unit cell [0, 1] that are not covered by any of the grow-
|
| 441 |
+
ing intervals Li(t) = [pi−ri−t, pi+ri+t], i = 1, . . . , m.
|
| 442 |
+
For t = 0, since all initial intervals Li(0) are disjoint,
|
| 443 |
+
they cover the total length 2
|
| 444 |
+
m
|
| 445 |
+
�
|
| 446 |
+
i=1
|
| 447 |
+
ri = l.
|
| 448 |
+
Then the graph of ψ0(t) at t = 0 starts from the
|
| 449 |
+
point (0, 1 − l). So ψ0(t) linearly decreases from the
|
| 450 |
+
initial value ψ0(0) = 1 − l except for m critical values
|
| 451 |
+
of t where one of the gap intervals [pi + ri + t, pi+1 −
|
| 452 |
+
ri+1−t] between successive growing intervals Li(t) and
|
| 453 |
+
Li+1(t) shrinks to a point. These critical radii t are
|
| 454 |
+
ordered according to the gaps g[1] ≤ g[2] ≤ · · · ≤ g[m].
|
| 455 |
+
The first critical radius is t =
|
| 456 |
+
1
|
| 457 |
+
2g[1], when a
|
| 458 |
+
shortest gap interval of the length g[1] is covered
|
| 459 |
+
by the growing successive intervals. At this moment
|
| 460 |
+
|
| 461 |
+
Springer Nature 2021 LATEX template
|
| 462 |
+
Density functions of periodic sequences
|
| 463 |
+
5
|
| 464 |
+
Fig. 3 The sequence S =
|
| 465 |
+
�
|
| 466 |
+
0, 1
|
| 467 |
+
3 , 1
|
| 468 |
+
2
|
| 469 |
+
�
|
| 470 |
+
+ Z has the points of weights
|
| 471 |
+
1
|
| 472 |
+
12 , 0, 1
|
| 473 |
+
12 , respectively. The growing intervals around
|
| 474 |
+
the red point 0 ≡ 1 (mod 1), green point 1
|
| 475 |
+
3 , blue point 1
|
| 476 |
+
2 have the same color for various radii t, see Examples 3.1, 4.1, 5.1.
|
| 477 |
+
t = 1
|
| 478 |
+
2g[1], all m growing intervals Li(t) have the total
|
| 479 |
+
length l + mg[1]. Then the 0-th density ψ0(t) has the
|
| 480 |
+
first corner points (0, 1−l) and
|
| 481 |
+
�g[1]
|
| 482 |
+
2 , 1 − l − mg[1]
|
| 483 |
+
�
|
| 484 |
+
.
|
| 485 |
+
The second critical radius is t
|
| 486 |
+
=
|
| 487 |
+
g[2]
|
| 488 |
+
2 , when
|
| 489 |
+
all
|
| 490 |
+
intervals
|
| 491 |
+
Li(t)
|
| 492 |
+
have
|
| 493 |
+
the
|
| 494 |
+
total
|
| 495 |
+
length
|
| 496 |
+
l +
|
| 497 |
+
g[1] + (m − 1)g[2], i.e. the next corner point is
|
| 498 |
+
�g[2]
|
| 499 |
+
2 , 1 − l − g[1] − (m − 1)g[2]
|
| 500 |
+
�
|
| 501 |
+
. If g[1] = g[2], then
|
| 502 |
+
both corner points coincide, so ψ0(t) will continue
|
| 503 |
+
from the joint corner point.
|
| 504 |
+
The above pattern generalizes to the i-th critical
|
| 505 |
+
radius t = 1
|
| 506 |
+
2g[i], when all covered intervals have the
|
| 507 |
+
total length
|
| 508 |
+
i−1
|
| 509 |
+
�
|
| 510 |
+
j=1
|
| 511 |
+
g[j] (for the fully covered intervals)
|
| 512 |
+
plus (m − i + 1)g[i] (for the still growing intervals).
|
| 513 |
+
For the final critical radius t =
|
| 514 |
+
g[m]
|
| 515 |
+
2 , the whole
|
| 516 |
+
unit cell [0, 1] is covered by the grown intervals because
|
| 517 |
+
m
|
| 518 |
+
�
|
| 519 |
+
j=1
|
| 520 |
+
g[j] = 1 − l. The final corner is (
|
| 521 |
+
g[m]
|
| 522 |
+
2 , 0).
|
| 523 |
+
□
|
| 524 |
+
Example 3.3 applies Theorem 3.2 to get ψ0 found for the periodic sequence S in Example 3.1.

Fig. 4 The 0-th density function ψ0(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, respectively, see Example 3.1.

Example 3.3 (using Theorem 3.2). The sequence S = {0, 1/3, 1/2} + Z in Example 3.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has l = 2(r1 + r2 + r3) = 1/3 and the initial gaps between successive intervals
g1 = (1 + p1 − r1) − (p3 + r3) = (1 − 1/12) − (1/2 + 1/12) = 1/3,
g2 = (p2 − r2) − (p1 + r1) = (1/3 − 0) − (0 + 1/12) = 1/4,
g3 = (p3 − r3) − (p2 + r2) = (1/2 − 1/12) − (1/3 + 0) = 1/12.
Order the gaps: g[1] = 1/12 < g[2] = 1/4 < g[3] = 1/3. Then
1 − l = 1 − 1/3 = 2/3,
1 − l − 3g[1] = 2/3 − 3/12 = 5/12,
1 − l − g[1] − 2g[2] = 2/3 − 1/12 − 2/4 = 1/12,
1 − l − g[1] − g[2] − g[3] = 2/3 − 1/12 − 1/4 − 1/3 = 0.
By Theorem 3.2, ψ0(t) has the corner points
(0, 1 − l) = (0, 2/3),
(g[1]/2, 1 − l − 3g[1]) = (1/24, 5/12),
(g[2]/2, 1 − l − g[1] − 2g[2]) = (1/8, 1/12),
(g[3]/2, 1 − l − g[1] − g[2] − g[3]) = (1/6, 0).
See the graph of the 0-th density ψ0(t) in Fig. 4. ■
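The corner points above can be checked in exact arithmetic. The sketch below is not from the paper (the function and variable names are ours); it implements the corner description of Theorem 3.2 with Python fractions:

```python
from fractions import Fraction as F

def psi0_corners(points, radii):
    """Corner points of the 0-th density psi_0 from Theorem 3.2,
    in exact arithmetic (our sketch, not the paper's code)."""
    m = len(points)
    l = 2 * sum(radii)  # total length of the initial intervals
    # gaps g_i = (p_i - r_i) - (p_{i-1} + r_{i-1}), cyclic with p_0 = p_m - 1
    gaps = sorted((points[i] - radii[i]) - (points[i - 1] + radii[i - 1])
                  + (1 if i == 0 else 0) for i in range(m))
    corners = [(F(0), 1 - l)]
    for i, gi in enumerate(gaps):  # ordered gaps g[1] <= ... <= g[m]
        covered = sum(gaps[:i]) + (m - i) * gi
        corners.append((gi / 2, 1 - l - covered))
    return corners

corners = psi0_corners([F(0), F(1, 3), F(1, 2)], [F(1, 12), F(0), F(1, 12)])
print(corners)
```

The output reproduces the four corner points (0, 2/3), (1/24, 5/12), (1/8, 1/12), (1/6, 0) derived in Example 3.3.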
By Theorem 3.2 any 0-th density function ψ0(t) is uniquely determined by the (unordered) set of gap lengths between successive intervals. Hence we can re-order these intervals without changing ψ0(t). For instance, the periodic sequence Q = {0, 1/2, 2/3} + Z with points 0, 1/2, 2/3 of weights 1/12, 1/12, 0 has the same ordered gaps g[1] = 1/12, g[2] = 1/4, g[3] = 1/3 as the periodic sequence S = {0, 1/3, 1/2} + Z in Example 3.1.

The above sequences S, Q are related by the mirror reflection t ↦ 1 − t. One can easily construct many non-isometric sequences with ψ0[S](t) = ψ0[Q](t). For any 1 ≤ i ≤ m − 3, the sequences Sm,i = {0, 2, 3, . . . , i + 2, i + 4, i + 5, . . . , m + 2} + (m + 2)Z have the same interval lengths d[1] = · · · = d[m−2] = 1, d[m−1] = d[m] = 2 but are not related by isometry (translations and reflections in R) because the intervals of length 2 are separated by i − 1 intervals of length 1 in Sm,i.
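The invariance of ψ0 under such re-orderings can be tested directly. The brute-force sketch below (our own code, not the paper's) computes ψ0 as the uncovered fraction of the unit cell from the union of the grown intervals taken modulo 1, and compares S with Q at several radii:

```python
from fractions import Fraction as F

def psi0(points, radii, t):
    """Uncovered fraction of the unit cell [0, 1] at radius t,
    from the union of grown intervals mod 1 (our brute-force sketch)."""
    segs = []
    for p, r in zip(points, radii):
        a, b = p - r - t, p + r + t
        if b - a >= 1:                  # one interval covers the whole circle
            return F(0)
        length = b - a
        a = a % 1                       # shift the left end into [0, 1)
        b = a + length
        segs.append((a, min(b, F(1))))
        if b > 1:                       # wrap the overhang back to the start
            segs.append((F(0), b - 1))
    segs.sort()
    covered, end = F(0), F(0)
    for a, b in segs:                   # sweep the union of segments
        a = max(a, end)
        if b > a:
            covered += b - a
            end = b
    return 1 - covered

S = ([F(0), F(1, 3), F(1, 2)], [F(1, 12), F(0), F(1, 12)])
Q = ([F(0), F(1, 2), F(2, 3)], [F(1, 12), F(1, 12), F(0)])
for t in (F(0), F(1, 24), F(1, 10), F(1, 8), F(1, 7), F(1, 6)):
    assert psi0(*S, t) == psi0(*Q, t)
print(psi0(*S, F(1, 24)))  # 5/12, the second corner of psi_0
```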
4 The 1st density function ψ1

This section proves Theorem 4.2 explicitly describing the 1st density ψ1[S](t) for any periodic sequence S of disjoint intervals. To prepare the proof of Theorem 4.2, Example 4.1 finds ψ1[S] for the sequence S from Example 3.1.

Example 4.1 (ψ1 for S = {0, 1/3, 1/2} + Z). The 1st density function ψ1(t) can be obtained as a sum of the three trapezoid functions ηR, ηG, ηB, each measuring the length of a region covered by a single interval of one color, see Fig. 3.

At the initial moment t = 0, the red intervals [0, 1/12] ∪ [11/12, 1] have the total length ηR(0) = 1/6. These red intervals [0, 1/12 + t] ∪ [11/12 − t, 1] for t ∈ [0, 1/8] grow until they touch the green interval [5/24, 11/24] and have the total length ηR(1/8) = 1/6 + 2/8 = 5/12 in the second picture of Fig. 3. So the graph of the red length ηR(t) linearly grows with gradient 2 from the point (0, 1/6) to the corner point (1/8, 5/12).

Fig. 5 The trapezoid functions ηR, ηG, ηB and the 1st density function ψ1(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, see Example 4.1.

For t ∈ [1/8, 1/6], the left red interval is shrinking at the same rate (due to the overlapping green interval) as the right red interval continues to grow until t = 1/6, when it touches the blue interval [1/4, 3/4]. Hence the graph of ηR(t) remains constant for t ∈ [1/8, 1/6] up to the corner point (1/6, 5/12). After that, the graph of ηR(t) linearly decreases (with gradient −2) until all red intervals are fully covered by the green and blue intervals at the moment t = 3/8, see the 6th picture in Fig. 3.

Hence the trapezoid function ηR has the piecewise linear graph through the corner points (0, 1/6), (1/8, 5/12), (1/6, 5/12), (3/8, 0). After that, ηR(t) = 0 remains constant for t ≥ 3/8. Fig. 5 shows the graphs of ηR, ηG, ηB and ψ1 = ηR + ηG + ηB. ■
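The sum ψ1 = ηR + ηG + ηB can be evaluated numerically. In the sketch below (our own code, not the paper's), the corners of ηR come from this example, while those of ηG and ηB are read off Fig. 5:

```python
from fractions import Fraction as F

def trapezoid(corners):
    """Piecewise-linear function through the given corners,
    constant 0 after the last corner (our sketch)."""
    pts = sorted(corners)
    def f(t):
        if t <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if t <= x1:
                return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
        return F(0)
    return f

# corners of eta_R (this example) and of eta_G, eta_B (Fig. 5)
eta = [trapezoid([(F(0), F(1, 6)), (F(1, 8), F(5, 12)), (F(1, 6), F(5, 12)), (F(3, 8), F(0))]),
       trapezoid([(F(0), F(0)), (F(1, 24), F(1, 12)), (F(1, 8), F(1, 12)), (F(1, 6), F(0))]),
       trapezoid([(F(0), F(1, 6)), (F(1, 24), F(1, 4)), (F(1, 6), F(1, 4)), (F(7, 24), F(0))])]

psi1 = lambda t: sum(f(t) for f in eta)
print(psi1(F(0)))  # 1/3, the total length of the initial intervals
```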
Theorem 4.2 extends Example 4.1 and proves that any ψ1(t) is a sum of trapezoid functions whose corners are explicitly described. We consider any index i = 1, . . . , m (of a point pi or a gap gi) modulo m, so that m + 1 ≡ 1 (mod m).

Theorem 4.2 (description of ψ1). Let a periodic sequence S = {p1, . . . , pm} + Z consist of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively. Consider the gaps gi = (pi − ri) − (pi−1 + ri−1), where i = 1, . . . , m and p0 = pm − 1, r0 = rm. Then the 1st density ψ1(t) is the sum of m trapezoid functions ηi, i = 1, . . . , m, with the corners
(0, 2ri), (gi/2, g + 2ri), (gi+1/2, g + 2ri), ((gi + gi+1)/2 + ri, 0),
where g = min{gi, gi+1}. Hence ψ1(t) is determined by the unordered set of unordered pairs (gi, gi+1), i = 1, . . . , m. ■
Proof The 1st density ψ1(t) equals the total length of subregions covered by exactly one of the intervals Li(t) = [pi − ri − t, pi + ri + t], i = 1, . . . , m, where all intervals are taken modulo 1 within [0, 1]. Hence ψ1(t) is the sum of the functions η1i, each measuring the length of the subinterval of Li(t) not covered by the other intervals Lj(t), j ∈ {1, . . . , m} − {i}.

Since the initial intervals Li(0) are disjoint, each function η1i(t) starts from the value η1i(0) = 2ri and linearly grows (with gradient 2) up to ηi(g/2) = 2ri + g, where g = min{gi, gi+1}, when the growing interval Li(t) of the length 2ri + 2t = 2ri + g touches its closest neighboring interval Li±1(t) across a shortest gap g.

If (say) gi < gi+1, then the subinterval covered only by Li(t) is shrinking on the left and is growing at the same rate on the right until Li(t) touches the growing interval Li+1(t) on the right. During this growth, when t is between gi/2 and gi+1/2, the trapezoid function ηi(t) = g remains constant. If gi = gi+1, this horizontal line collapses to one point in the graph of ηi(t).

For t ≥ max{gi, gi+1}/2, the subinterval covered only by Li(t) is shrinking on both sides until the neighboring intervals Li±1(t) meet at the mid-point between their initial closest endpoints pi−1 + ri−1 and pi+1 − ri+1. This meeting time is t = (pi+1 − ri+1 − pi−1 − ri−1)/2 = (gi + 2ri + gi+1)/2, which is also illustrated by Fig. 6. So the trapezoid function ηi has the corners (0, 2ri), (gi/2, 2ri + g), (gi+1/2, 2ri + g), ((gi + gi+1)/2 + ri, 0) as expected. □

Example 4.3 applies Theorem 4.2 to get ψ1 found for the periodic sequence S in Example 4.1.
Example 4.3 (using Theorem 4.2 for ψ1). The sequence S = {0, 1/3, 1/2} + Z in Example 4.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has the initial gaps between successive intervals g1 = 1/3, g2 = 1/4, g3 = 1/12, see all the computations in Example 3.3.

Case (R). In Theorem 4.2 for the trapezoid function ηR = η1 measuring the fractional length covered only by the red interval, we set k = 1 and i = 1. Then ri = 1/12, gi = 1/3 and gi+1 = 1/4, so
(gi + gi+1)/2 + ri = (1/2)(1/3 + 1/4) + 1/12 = 3/8,
g = min{gi, gi+1} = 1/4, g + 2ri = 1/4 + 2/12 = 5/12.
Then ηR = η1 has the following corner points:
(0, 2ri) = (0, 1/6),
(gi/2, g + 2ri) = (1/6, 5/12),
(gi+1/2, g + 2ri) = (1/8, 5/12),
((gi + gi+1)/2 + ri, 0) = (3/8, 0),
where the two middle corners are accidentally swapped due to gi > gi+1, but they define the same trapezoid function as in the first picture of Fig. 5.

Case (G). In Theorem 4.2 for the trapezoid function ηG = η2 measuring the fractional length covered only by the green interval, we set k = 1 and i = 2. Then ri = 0, gi = 1/4 and gi+1 = 1/12, so
(gi + gi+1)/2 + ri = (1/2)(1/4 + 1/12) + 0 = 1/6,
g = min{gi, gi+1} = 1/12, g + 2ri = 1/12 + 0 = 1/12.
Then ηG = η2 has the following corner points exactly as shown in the second picture of Fig. 5:
(0, 2ri) = (0, 0),
(gi/2, g + 2ri) = (1/8, 1/12),
(gi+1/2, g + 2ri) = (1/24, 1/12),
((gi + gi+1)/2 + ri, 0) = (1/6, 0).

Fig. 6 The distances g, s, g′ between line intervals used in the proofs of Theorems 4.2 and 5.2, shown here for k = 3.

Case (B). In Theorem 4.2 for the trapezoid function ηB = η3 measuring the fractional length covered only by the blue interval, we set k = 1 and i = 3. Then ri = 1/12, gi = 1/12 and gi+1 = 1/3, so
(gi + gi+1)/2 + ri = (1/2)(1/12 + 1/3) + 1/12 = 7/24,
g = min{gi, gi+1} = 1/12, g + 2ri = 1/12 + 2/12 = 1/4.
Then ηB = η3 has the following corner points:
(0, 2ri) = (0, 1/6),
(gi/2, g + 2ri) = (1/24, 1/4),
(gi+1/2, g + 2ri) = (1/6, 1/4),
((gi + gi+1)/2 + ri, 0) = (7/24, 0),
exactly as shown in the third picture of Fig. 5. ■
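The three cases above follow one recipe, which can be captured in a short sketch (our own code, with 0-based cyclic indices instead of the paper's 1-based ones):

```python
from fractions import Fraction as F

def eta1_corners(gaps, radii, i):
    """Corners of the trapezoid eta_i from Theorem 4.2 (our sketch).
    gaps[i] is the gap before the i-th interval; indices are cyclic, 0-based."""
    m = len(gaps)
    gi, gi1, ri = gaps[i], gaps[(i + 1) % m], radii[i]
    g = min(gi, gi1)
    return [(F(0), 2 * ri),
            (gi / 2, g + 2 * ri),
            (gi1 / 2, g + 2 * ri),
            ((gi + gi1) / 2 + ri, F(0))]

gaps  = [F(1, 3), F(1, 4), F(1, 12)]   # g1, g2, g3 from Example 3.3
radii = [F(1, 12), F(0), F(1, 12)]     # r1, r2, r3
for i, name in enumerate(("R", "G", "B")):
    print(name, eta1_corners(gaps, radii, i))
```

The printed corners reproduce Cases (R), (G), (B) above, including the "swapped" middle corners of ηR when gi > gi+1.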
5 Higher density functions ψk

This section proves Theorem 5.2 describing the k-th density function ψk[S](t) for any k ≥ 2 and a periodic sequence S of disjoint intervals. To prepare the proof of Theorem 5.2, Example 5.1 computes ψ2[S] for S from Example 3.1.

Example 5.1 (ψ2 for S = {0, 1/3, 1/2} + Z). The density ψ2(t) can be found as the sum of the trapezoid functions ηGB, ηBR, ηRG, each measuring the length of a double intersection, see Fig. 3.

For the green interval [1/3 − t, 1/3 + t] and the blue interval [5/12 − t, 7/12 + t], the graph of the function ηGB(t) is piecewise linear and starts at the point (1/24, 0) because these intervals touch at t = 1/24. The green-blue intersection [5/12 − t, 1/3 + t] grows until t = 1/6, when the resulting interval [1/4, 1/2] touches the red interval on the left. At the same time, the graph of ηGB(t) is linearly growing (with gradient 2) to the corner (1/6, 1/4), see Fig. 7.

For t ∈ [1/6, 7/24], the green-blue intersection interval becomes shorter on the left, but grows at the same rate on the right until t = 7/24, when [1/8, 5/8] touches the red interval [5/8, 1] on the right, see the 5th picture in Fig. 3. So the graph of ηGB(t) remains constant up to the point (7/24, 1/4).

For t ∈ [7/24, 5/12], the green-blue intersection interval is shortening from both sides. So the graph of ηGB(t) linearly decreases (with gradient −2) and returns to the t-axis at the corner (5/12, 0), then remains constant: ηGB(t) = 0 for t ≥ 5/12. Fig. 7 shows all the trapezoid functions for double intersections and ψ2 = ηGB + ηBR + ηRG. ■
Theorem 5.2 (description of ψk for k ≥ 2). Let a periodic sequence S = {p1, . . . , pm} + Z consist of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively. Consider the gaps gi = (pi − ri) − (pi−1 + ri−1) between the successive intervals of S, where i = 1, . . . , m and p0 = pm − 1, r0 = rm.

For k ≥ 2, the density function ψk(t) equals the sum of m trapezoid functions ηk,i(t), i = 1, . . . , m, each having the following corner points:
(s/2, 0), ((g + s)/2, g), ((s + g′)/2, g), ((g + s + g′)/2, 0),
where g, g′ are the minimum and maximum values in the pair {gi + 2ri, gi+k + 2ri+k−1}, and
s = ∑_{j=i+1}^{i+k−1} gj + 2 ∑_{j=i+1}^{i+k−2} rj, so s = gi+1 for k = 2.
Hence ψk(t) is determined by the unordered set of the ordered tuples (g, s, g′), i = 1, . . . , m. ■

Fig. 7 The trapezoid functions ηGB, ηBR, ηRG and the 2nd density function ψ2(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, see Example 5.1.
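A hypothetical implementation of the corner description in Theorem 5.2 (our own code, 0-based cyclic indices) reproduces the corners of ηGB found in Example 5.1:

```python
from fractions import Fraction as F

def eta_k_corners(gaps, radii, k, i):
    """Corners of eta_{k,i} from Theorem 5.2 (our sketch, 0-based cyclic indices)."""
    m = len(gaps)
    s = sum(gaps[(i + j) % m] for j in range(1, k)) \
        + 2 * sum(radii[(i + j) % m] for j in range(1, k - 1))
    a = gaps[i] + 2 * radii[i]                          # g_i + 2 r_i
    b = gaps[(i + k) % m] + 2 * radii[(i + k - 1) % m]  # g_{i+k} + 2 r_{i+k-1}
    g, g2 = min(a, b), max(a, b)                        # g and g'
    return [(s / 2, F(0)), ((g + s) / 2, g),
            ((s + g2) / 2, g), ((g + s + g2) / 2, F(0))]

gaps  = [F(1, 3), F(1, 4), F(1, 12)]   # g1, g2, g3 from Example 3.3
radii = [F(1, 12), F(0), F(1, 12)]
print(eta_k_corners(gaps, radii, k=2, i=1))  # eta_GB: corners from Example 5.1
```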
Proof The k-th density function ψk(t) measures the total fractional length of k-fold intersections among the m intervals Li(t) = [pi − ri − t, pi + ri + t], i = 1, . . . , m. Now we visualize all such intervals Li(t) in the line R without mapping them modulo 1 to the unit cell [0, 1].

Since all radii ri ≥ 0, only k successive intervals can contribute to k-fold intersections. So a k-fold intersection of growing intervals emerges only when the two intervals Li(t) and Li+k−1(t) overlap, because their intersection should also be covered by all the intermediate intervals Li(t), Li+1(t), . . . , Li+k−1(t).

Then the density ψk(t) equals the sum of the m trapezoid functions ηk,i, i = 1, . . . , m, each equal to the length of the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) not covered by other intervals. Then ηk,i(t) remains 0 until the first critical moment t when 2t equals the distance between the points pi + ri and pi+k−1 − ri+k−1 in R, see Fig. 6, so 2t = ∑_{j=i+1}^{i+k−1} gj + 2 ∑_{j=i+1}^{i+k−2} rj = s. Hence t = s/2 and (s/2, 0) is the first corner point of ηk,i(t).

At t = s/2, the interval of the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) starts expanding on both sides. Hence ηk,i(t) starts increasing (with gradient 2) until the k-fold intersection touches one of the neighboring intervals Li−1(t) or Li+k(t) on the left or on the right.

The left interval Li−1(t) touches the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) when 2t equals the distance from pi−1 + ri−1 (the right endpoint of Li−1) to pi+k−1 − ri+k−1 (the left endpoint of Li+k−1), see Fig. 6, so 2t = ∑_{j=i}^{i+k−1} gj + 2 ∑_{j=i}^{i+k−2} rj = gi + 2ri + s.

The right interval Li+k(t′) touches the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t′) when 2t′ equals the distance from pi + ri (the right endpoint of Li) to pi+k − ri+k (the left endpoint of Li+k), see Fig. 6, so 2t′ = ∑_{j=i+1}^{i+k} gj + 2 ∑_{j=i+1}^{i+k−1} rj = s + gi+k + 2ri+k−1.

If (say) gi + 2ri = g < g′ = gi+k + 2ri+k−1, the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) first touches Li−1(t) at the earlier moment t before reaching Li+k(t′) at the later moment t′. At the earlier moment, ηk,i(t) equals 2(t − s/2) = gi + 2ri = g and has the corner ((g + s)/2, g).

After that, the k-fold intersection is shrinking on the left and expanding at the same rate on the right. So the function ηk,i(t) = g remains constant until the k-fold intersection touches the right interval Li+k(t′). At this later moment t′ = (s + gi+k)/2 + ri+k−1 = (s + g′)/2, ηk,i(t′) still equals g and has the corner ((s + g′)/2, g).

If gi + 2ri = g′ > g = gi+k + 2ri+k−1, the growing intervals Li−1(t) and Li+k(t) touch the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) in the opposite order. However, the above arguments lead to the same corners ((g + s)/2, g) and ((s + g′)/2, g) of ηk,i(t). If g = g′, the two corners collapse to one corner in the graph of ηk,i(t).

The k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) becomes fully covered when the intervals Li−1(t) and Li+k(t) meet. At this moment, 2t equals the distance from pi−1 + ri−1 (the right endpoint of Li−1) to pi+k − ri+k (the left endpoint of Li+k), see Fig. 6, so 2t = ∑_{j=i}^{i+k} gj + 2 ∑_{j=i}^{i+k−1} rj = gi + 2ri + s + gi+k + 2ri+k−1 = g + s + g′. The graph of ηk,i(t) has the final corner ((g + s + g′)/2, 0). □
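As a sanity check of this proof, one can compare the trapezoid sum for ψ2 with the exactly-2-fold coverage computed directly from the intervals. The brute-force sketch below is our own code (not the paper's), using the ηGB, ηBR, ηRG corners from Example 5.1 and Fig. 7:

```python
from fractions import Fraction as F

def exact_k_length(points, radii, k, t):
    """Length of the subset of [0, 1) covered by exactly k grown
    intervals, brute force over nearby periodic copies (our sketch)."""
    ivals = [(p + n - r - t, p + n + r + t)
             for n in range(-2, 3) for p, r in zip(points, radii)]
    cuts = sorted({F(0), F(1), *[x for ab in ivals for x in ab if 0 < x < 1]})
    return sum(b - a for a, b in zip(cuts, cuts[1:])
               if sum(x < (a + b) / 2 < y for x, y in ivals) == k)

def trap(c, t):
    """Evaluate the trapezoid with corners c at radius t (0 outside)."""
    for (x0, y0), (x1, y1) in zip(c, c[1:]):
        if x0 <= t <= x1:
            return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
    return F(0)

S = ([F(0), F(1, 3), F(1, 2)], [F(1, 12), F(0), F(1, 12)])
corners = [  # eta_GB, eta_BR, eta_RG from Example 5.1 / Fig. 7
    [(F(1, 24), F(0)), (F(1, 6), F(1, 4)), (F(7, 24), F(1, 4)), (F(5, 12), F(0))],
    [(F(1, 6), F(0)), (F(7, 24), F(1, 4)), (F(3, 8), F(1, 4)), (F(1, 2), F(0))],
    [(F(1, 8), F(0)), (F(1, 6), F(1, 12)), (F(3, 8), F(1, 12)), (F(5, 12), F(0))],
]
for t in (F(1, 12), F(1, 6), F(1, 4), F(1, 3)):
    assert exact_k_length(*S, 2, t) == sum(trap(c, t) for c in corners)
```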
Example 5.3 applies Theorem 5.2 to get ψ2 found for the periodic sequence S in Example 3.1.

Example 5.3 (using Theorem 5.2 for ψ2). The sequence S = {0, 1/3, 1/2} + Z in Example 4.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has the initial gaps g1 = 1/3, g2 = 1/4, g3 = 1/12, see Example 3.3. In Theorem 5.2, the 2nd density function ψ2[S](t) is expressed as a sum of the trapezoid functions computed via their corners below.

Case (GB). For the function ηGB measuring the double intersections of the green and blue intervals centered at p2 = pi and p3 = pi+k−1, we set k = 2 and i = 2. Then we have the radii ri = 0 and ri+1 = 1/12, the gaps gi = 1/4, gi+1 = 1/12, gi+2 = 1/3, and the sum s = gi+1 = 1/12. The pair {gi + 2ri, gi+2 + 2ri+1} = {1/4 + 0, 1/3 + 2/12} has the minimum value g = 1/4 and the maximum value g′ = 1/2. Then η2,2[S](t) = ηGB has the following corners as expected in the top picture of Fig. 7:
(s/2, 0) = (1/24, 0),
((g + s)/2, g) = ((1/2)(1/4 + 1/12), 1/4) = (1/6, 1/4),
((s + g′)/2, g) = ((1/2)(1/12 + 1/2), 1/4) = (7/24, 1/4),
((g + s + g′)/2, 0) = ((1/2)(1/4 + 1/12 + 1/2), 0) = (5/12, 0).

Case (BR). For the trapezoid function ηBR measuring the double intersections of the blue and red intervals centered at p3 = pi and p1 = pi+k−1, we set k = 2 and i = 3. Then we have the radii ri = 1/12 = ri+1, the gaps gi = 1/12, gi+1 = 1/3, gi+2 = 1/4, and s = gi+1 = 1/3. The pair {gi + 2ri, gi+2 + 2ri+1} = {1/12 + 2/12, 1/4 + 2/12} has the minimum g = 1/4 and the maximum g′ = 5/12. Then η2,3[S](t) = ηBR has the following corners as expected in the second picture of Fig. 7:
(s/2, 0) = (1/6, 0),
((g + s)/2, g) = ((1/2)(1/4 + 1/3), 1/4) = (7/24, 1/4),
((s + g′)/2, g) = ((1/2)(1/3 + 5/12), 1/4) = (3/8, 1/4),
((g + s + g′)/2, 0) = ((1/2)(1/4 + 1/3 + 5/12), 0) = (1/2, 0).

Case (RG). For the trapezoid function ηRG measuring the double intersections of the red and green intervals centered at p1 = pi and p2 = pi+k−1, we set k = 2 and i = 1. Then we have the radii ri = 1/12 and ri+1 = 0, the gaps gi = 1/3, gi+1 = 1/4, gi+2 = 1/12, and s = gi+1 = 1/4. The pair {gi + 2ri, gi+2 + 2ri+1} = {1/3 + 2/12, 1/12 + 0} has the minimum g = 1/12 and the maximum g′ = 1/2. Then η2,1[S](t) = ηRG has the following corners:
(s/2, 0) = (1/8, 0),
((g + s)/2, g) = ((1/2)(1/12 + 1/4), 1/12) = (1/6, 1/12),
((s + g′)/2, g) = ((1/2)(1/4 + 1/2), 1/12) = (3/8, 1/12),
((g + s + g′)/2, 0) = ((1/2)(1/12 + 1/4 + 1/2), 0) = (5/12, 0),
as expected in the third picture of Fig. 7. ■
6 Properties of new densities

This section proves the periodicity of the sequence ψk with respect to the index k ≥ 0 in Theorem 6.2, which was a bit unexpected from the original Definition 1.2. We start with a simpler example for the familiar 3-point sequence in Fig. 3.

Example 6.1 (periodicity of ψk in the index k). Let the periodic sequence S = {0, 1/3, 1/2} + Z have three points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively. The initial intervals L1(0) = [−1/12, 1/12], L2(0) = [1/3, 1/3], L3(0) = [5/12, 7/12] have the 0-fold intersection measured by ψ0(0) = 2/3 and the 1-fold intersection measured by ψ1(0) = 1/3, see Fig. 4 and 5.

By the time t = 1/2 the initial intervals will grow to L1(1/2) = [−7/12, 7/12], L2(1/2) = [−1/6, 5/6], L3(1/2) = [−1/12, 13/12]. The grown intervals at the radius t = 1/2 have the 3-fold intersection [−1/12, 7/12] of the length ψ3(1/2) = 2/3, which coincides with ψ0(0) = 2/3.

With the extra interval L4(1/2) = [5/12, 19/12] centered at p4 = 1, the 4-fold intersection is L1 ∩ L2 ∩ L3 ∩ L4 = [5/12, 7/12]. With the extra interval L5(1/2) = [5/6, 11/6] centered at p5 = 4/3, the 4-fold intersection L2 ∩ L3 ∩ L4 ∩ L5 is the single point 5/6. With the extra interval L6(1/2) = [11/12, 25/12] centered at p6 = 3/2, the 4-fold intersection is L3 ∩ L4 ∩ L5 ∩ L6 = [11/12, 13/12]. Hence the total length of the 4-fold intersections at t = 1/2 is ψ4(1/2) = 1/3, which coincides with ψ1(0) = 1/3.
For the larger t = 1, the six grown intervals
|
| 1654 |
+
L1(1) =
|
| 1655 |
+
�
|
| 1656 |
+
−13
|
| 1657 |
+
12, 13
|
| 1658 |
+
12
|
| 1659 |
+
�
|
| 1660 |
+
, L2(1) =
|
| 1661 |
+
�
|
| 1662 |
+
−2
|
| 1663 |
+
3, 4
|
| 1664 |
+
3
|
| 1665 |
+
�
|
| 1666 |
+
,
|
| 1667 |
+
L3(1) =
|
| 1668 |
+
�
|
| 1669 |
+
− 7
|
| 1670 |
+
12, 19
|
| 1671 |
+
12
|
| 1672 |
+
�
|
| 1673 |
+
, L4(1) =
|
| 1674 |
+
�
|
| 1675 |
+
− 1
|
| 1676 |
+
12, 25
|
| 1677 |
+
12
|
| 1678 |
+
�
|
| 1679 |
+
,
|
| 1680 |
+
L5(1) =
|
| 1681 |
+
�1
|
| 1682 |
+
3, 7
|
| 1683 |
+
3
|
| 1684 |
+
�
|
| 1685 |
+
,
|
| 1686 |
+
L6(1) =
|
| 1687 |
+
� 5
|
| 1688 |
+
12, 31
|
| 1689 |
+
12
|
| 1690 |
+
�
|
| 1691 |
+
have the 6-fold intersection
|
| 1692 |
+
� 5
|
| 1693 |
+
12, 13
|
| 1694 |
+
12
|
| 1695 |
+
�
|
| 1696 |
+
of length
|
| 1697 |
+
ψ6(1) = 2
|
| 1698 |
+
3 coinciding with ψ0(0) = ψ3( 1
|
| 1699 |
+
2) = 2
|
| 1700 |
+
3. ■
|
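The interval arithmetic in Example 6.1 is easy to verify by machine. Below is a minimal Python sketch, not the authors' R code [5]: it computes ψk(t) as the exact fractional length of the part of the unit cell covered by exactly k grown intervals, by evaluating the coverage count between consecutive breakpoints. The names `coverage` and `psi` are our own illustrative choices.

```python
from math import floor, ceil

def coverage(x, points, radii, t):
    """Count grown intervals [p + n - (r + t), p + n + (r + t)], over all
    motif points p and integer shifts n, that contain the point x."""
    c = 0
    for p, r in zip(points, radii):
        lo = x - p - (r + t)
        hi = x - p + (r + t)
        c += floor(hi) - ceil(lo) + 1  # number of integers n with lo <= n <= hi
    return c

def psi(k, t, points, radii):
    """Fractional length of the part of the unit cell [0, 1) covered by
    exactly k grown intervals, computed via breakpoints of the coverage."""
    cuts = {0.0, 1.0}
    for p, r in zip(points, radii):
        cuts.add((p - r - t) % 1.0)
        cuts.add((p + r + t) % 1.0)
    xs = sorted(cuts)
    total = 0.0
    for a, b in zip(xs, xs[1:]):
        if b - a > 1e-9 and coverage((a + b) / 2, points, radii, t) == k:
            total += b - a
    return total

S = [0.0, 1/3, 0.5]    # motif points of Example 6.1
R = [1/12, 0.0, 1/12]  # their radii
# psi(0, 0, S, R) and psi(3, 1/2, S, R) both return 2/3 (up to float error),
# psi(1, 0, S, R) and psi(4, 1/2, S, R) both return 1/3, as in the example.
```

The same function also confirms the coincidences of Example 6.1 for other indices and radii, since each ψk is piecewise constant between the computed breakpoints.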
Theorem 6.2 below proves that the coincidences in Example 6.1 are not accidental. The periodicity of ψk with respect to k is illustrated by Fig. 8.
Theorem 6.2 (periodicity of ψk in the index k). The density functions ψk[S] of a periodic sequence S = {p1, . . . , pm} + Z, consisting of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively, satisfy the periodicity ψk+m(t + 1/2) = ψk(t) for any k ≥ 0 and t ≥ 0. ■
Proof Since the initial intervals are disjoint, for k ≥ 0 any (k + m)-fold intersection involves k + m successive intervals Li(t), . . . , Li+k+m−1(t) centered around the points of S. Then we can find an interval [x, x + 1] covering exactly m of these initial intervals of S. By collapsing [x, x + 1] to the point x, any (k + m)-fold intersection of k + m intervals grown by a radius r ≥ 1/2 becomes a k-fold intersection of k intervals grown by t = r − 1/2. Both k-fold and (k + m)-fold intersections within any unit cell have the same fractional length, so ψk+m(t + 1/2) = ψk(t) for any t ≥ 0. □

Springer Nature 2021 LATEX template
Density functions of periodic sequences

Fig. 8 The densities ψk, k = 0, . . . , 9 for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, respectively. The densities ψ0, ψ1, ψ2 are described in Examples 3.1, 4.1, 5.1 and determine all other densities by the periodicity in Theorem 6.2.
The symmetry ψm−k(1/2 − t) = ψk(t) for k = 0, . . . , [m/2] and t ∈ [0, 1/2] from [3, Theorem 8] no longer holds for points with different radii. For example, ψ1(t) ≠ ψ2(1/2 − t) for the periodic sequence S = {0, 1/3, 1/2} + Z, see Fig. 5, 7. If all points have the same radius r, [3, Theorem 8] implies the symmetry after replacing t by t + 2r.
The main results of [3] implied that all density functions fail to distinguish the non-isometric sequences S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z and Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z of points with zero radii. Example 6.3 shows that the densities for sequences with non-zero radii are strictly stronger and distinguish the sequences S15 ≇ Q15.
Example 6.3 (ψk for S15, Q15 with neighbor radii). For any point p in a periodic sequence S ⊂ R, define its neighbor radius as the half-distance to a closest neighbor of p within the sequence S. This choice of radii respects the isometry in the sense that periodic sequences S, Q with zero-sized radii are isometric if and only if S, Q with neighbor radii are isometric. Fig. 9 shows that the densities ψk for k ≥ 2 distinguish the non-isometric sequences S15 and Q15 scaled down by the factor 15 to the unit cell [0, 1], see Example 2.1. ■
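The neighbor-radius construction is simple to compute. Here is a minimal Python sketch (the paper's published code [5] is in R; the function name `neighbor_radii` is our own illustrative choice):

```python
def neighbor_radii(points, period):
    """Half-distance from each motif point to its closest neighbor
    in the periodic sequence {points} + period * Z."""
    pts = sorted(points)
    m = len(pts)
    radii = []
    for i, p in enumerate(pts):
        left = pts[i - 1] - (period if i == 0 else 0)             # previous point
        right = pts[(i + 1) % m] + (period if i == m - 1 else 0)  # next point
        radii.append(min(p - left, right - p) / 2)
    return radii

S15 = [0, 1, 3, 4, 5, 7, 9, 10, 12]  # motif of S15 before scaling down by 15
# neighbor_radii(S15, 15) -> [0.5, 0.5, 0.5, 0.5, 0.5, 1.0, 0.5, 0.5, 1.0]
```

Since the radii are determined by the point distances, applying any isometry to the sequence leaves the resulting radii unchanged, which is exactly the compatibility claimed in Example 6.3.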
Corollary 6.4 (computation of ψk(t)). Let S, Q ⊂ R be periodic sequences with at most m motif points. For k ≥ 1, one can draw the graph of the k-th density function ψk[S] in time O(m²). One can check in time O(m³) if Ψ[S] = Ψ[Q]. ■
Proof To draw the graph of ψk[S] or evaluate the k-th density function ψk[S](t) at any radius t, we first use the periodicity from Theorem 6.2 to reduce k to the range 0, 1, . . . , m. In time O(m log m) we put the points from a unit cell U (scaled to [0, 1] for convenience) in the increasing (cyclic) order p1, . . . , pm. In time O(m) we compute the gaps gi = (pi − ri) − (pi−1 + ri−1) between successive intervals.

For k = 0, we put the gaps in the increasing order g[1] ≤ · · · ≤ g[m] in time O(m log m). By Theorem 3.2, in time O(m²) we write down the O(m) corner points whose horizontal coordinates are the critical radii where ψ0(t) can change its gradient.

We evaluate ψ0 at every critical radius t by summing up the values of m trapezoid functions at t, which needs O(m²) time. It remains to plot the points at all O(m) critical radii t and connect the successive points by straight lines, so the total time is O(m²).

For any larger fixed index k = 1, . . . , m, in time O(m²) we write down all O(m) corner points from Theorems 4.2 and 5.2, which leads to the graph of ψk(t) similarly to the above argument for k = 0.

To decide if the infinite sequences of density functions coincide: Ψ[S] = Ψ[Q], by Theorem 6.2 it suffices to check only if O(m) density functions coincide: ψk[S](t) = ψk[Q](t) for k = 0, 1, . . . , [m/2].

To check if two piecewise linear functions coincide, it remains to compare their values at all O(m) critical radii t from the corner points in Theorems 3.2, 4.2, 5.2. Since these values were found in time O(m²) above, the total time for k = 0, 1, . . . , [m/2] is O(m³). □

Fig. 9 The densities ψk, k = 0, . . . , 10, distinguish (already for k ≥ 2) the sequences (scaled down by period 15) S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z (top) and Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z (bottom), where the radius ri of any point is the half-distance to its closest neighbor. These sequences with zero radii have identical ψk for all k, see [3, Example 10].
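The first preprocessing step of the proof, computing the cyclic gaps gi between successive initial intervals, can be sketched in a few lines of Python (our own illustrative function, not the authors' R code [5]):

```python
def gaps(points, radii):
    """Cyclic gaps g_i = (p_i - r_i) - (p_{i-1} + r_{i-1}) between successive
    initial intervals, for motif points scaled to the unit cell [0, 1)."""
    m = len(points)
    out = []
    for i in range(m):
        # the gap of the first interval wraps around to the last interval
        prev_end = points[i - 1] + radii[i - 1] - (1.0 if i == 0 else 0.0)
        out.append(points[i] - radii[i] - prev_end)
    return out

# For Example 6.1: gaps([0, 1/3, 1/2], [1/12, 0, 1/12]) -> [1/3, 1/4, 1/12];
# the gaps plus the interval lengths sum to the unit cell length 1.
```

Sorting the returned list then gives the ordered gaps g[1] ≤ · · · ≤ g[m] used in the case k = 0 of the proof.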
All previous examples show densities with a single local maximum. However, the new R code [5] helped us discover the opposite examples.

Fig. 10 For the periodic sequence S = {0, 1/8, 1/4, 3/4} + Z, all of whose points have radii 0, the 2nd density ψ2[S](t) has a local minimum at t = 1/4 between two local maxima.
Example 6.5 (densities with multiple maxima). Fig. 10 shows a simple 4-point sequence S whose 2nd density ψ2[S] has two local maxima. Fig. 11 and 12 show more complicated sequences whose density functions have more than two maxima. ■
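The local minimum in Fig. 10 can be confirmed numerically. The sketch below (our own Python, assuming the exactly-k definition of ψk) evaluates ψ2 for S = {0, 1/8, 1/4, 3/4} with zero radii at three radii around t = 1/4:

```python
from math import floor, ceil

def psi_exact(k, t, points):
    """Length within [0, 1) covered by exactly k of the intervals
    [p + n - t, p + n + t] over all points p and integer shifts n."""
    cuts = sorted({0.0, 1.0} | {(p + s * t) % 1.0 for p in points for s in (-1, 1)})
    length = 0.0
    for a, b in zip(cuts, cuts[1:]):
        if b - a < 1e-9:
            continue
        x = (a + b) / 2  # coverage is constant between consecutive cuts
        c = sum(floor(x - p + t) - ceil(x - p - t) + 1 for p in points)
        if c == k:
            length += b - a
    return length

S = [0.0, 1/8, 1/4, 3/4]
values = [psi_exact(2, t, S) for t in (0.2, 0.25, 0.3)]
# psi_2 dips at t = 1/4: values ~ [0.35, 0.25, 0.35]
```

So ψ2 drops from 0.35 to 0.25 at t = 1/4 and rises back to 0.35, matching the local minimum between two local maxima described in the caption of Fig. 10.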
Fig. 11 For the sequence S = {0, 1/81, 1/27, 1/9, 1/3} + Z, all of whose points have radii 0, ψ2[S], equal to the sum of the five trapezoid functions shown, has three maxima.

Fig. 12 For the sequence S = {0, 1/64, 1/16, 1/8, 1/4, 3/4} + Z, all of whose points have radii 0, ψ3[S] has 5 local maxima.
7 Conclusions and future work

In comparison with the past work [3], the key contributions of this paper are the following.

• Definition 1.2 extends density functions ψk to any periodic sets of points with radii ri ≥ 0.
• Theorems 3.2, 4.2, 5.2 explicitly describe all ψk for any periodic sequence S of points with radii.
• The descriptions of ψk allowed us to justify the periodicity of ψk in Theorem 6.2 and a quadratic algorithm computing any ψk in Corollary 6.4.
• The code [5] helped us distinguish S15 ≇ Q15 in Example 6.3 and find sequences whose densities have multiple local maxima in Example 6.5.

Here are the open problems for future work.

• Verify if density functions ψk[S](t) for small values of k distinguish all non-isometric periodic point sets S ⊂ Rn, at least with radii 0.
• Characterize the periodic sequences S ⊂ R all of whose density functions ψk for k ≥ 1 have a unique local maximum, unlike those in Example 6.5.
• Similar to Theorems 3.2, 4.2, 5.2, analytically describe the density functions ψk[S] for periodic point sets S ⊂ Rn in higher dimensions n > 1.
This research was supported by the grants of the UK Engineering and Physical Sciences Research Council (EP/R018472/1, EP/X018474/1) and the Royal Academy of Engineering Industrial Fellowship (IF2122/186) of the last author. We thank all reviewers for their time and helpful advice.
References

[1] Anosova, O., Kurlin, V.: Introduction to periodic geometry and topology. arxiv:2103.02749 (2021)
[2] Anosova, O., Kurlin, V.: An isometry classification of periodic point sets. In: Lecture Notes in Computer Science (Proceedings of DGMM), vol. 12708, pp. 229–241 (2021)
[3] Anosova, O., Kurlin, V.: Density functions of periodic sequences. In: Lecture Notes in Computer Science (Proceedings of DGMM), vol. 13493, pp. 395–408 (2022)
[4] Anosova, O., Kurlin, V.: Recognition of near-duplicate periodic patterns in polynomial time. arxiv:2205.15298 (2022)
[5] Anosova, O.: R code for density functions of periodic sequences (2023), https://github.com/oanosova/DensityFunctions1D
[6] Bright, M., Cooper, A.I., Kurlin, V.: Welcome to a continuous world of 3-dimensional lattices. arxiv:2109.11538 (2021)
[7] Bright, M., Cooper, A.I., Kurlin, V.: Geographic-style maps for 2-dimensional lattices. Acta Crystallographica Section A 79(1), 1–13 (2023)
[8] Edelsbrunner, H., Heiss, T., Kurlin, V., Smith, P., Wintraecken, M.: The density fingerprint of a periodic point set. In: SoCG, vol. 189, pp. 32:1–32:16 (2021)
[9] Grünbaum, F., Moore, C.: The use of higher-order invariants in the determination of generalized Patterson cyclotomic sets. Acta Cryst. A 51, 310–323 (1995)
[10] Kurlin, V.: A complete isometry classification of 3D lattices. arxiv:2201.10543 (2022)
[11] Kurlin, V.: Computable complete invariants for finite clouds of unlabeled points. arxiv:2207.08502 (2022), http://kurlin.org/projects/complete-isometry-invariants.pdf
[12] Kurlin, V.: Exactly computable and continuous metrics on isometry classes of finite and 1-periodic sequences. arXiv:2205.04388 (2022), http://kurlin.org/projects/periodic-geometry-topology/metric1D.pdf
[13] Kurlin, V.: Mathematics of 2-dimensional lattices. Foundations of Computational Mathematics (2022), http://kurlin.org/projects/lattice-geometry/lattices2Dmaths.pdf
[14] Mosca, M., Kurlin, V.: Voronoi-based similarity distances between arbitrary crystal lattices. Crystal Research and Technology 55(5), 1900197 (2020)
[15] Pozdnyakov, S., et al.: Incompleteness of atomic structure representations. Phys. Rev. Let. 125, 166001 (2020)
[16] Smith, P., Kurlin, V.: A practical algorithm for degree-k Voronoi domains of three-dimensional periodic point sets. In: Lecture Notes in Computer Science (Proceedings of ISVC), vol. 13599, pp. 377–391 (2022)
[17] Smith, P., Kurlin, V.: Families of point sets with identical 1D persistence. arxiv:2202.00577 (2022), http://kurlin.org/projects/periodic-geometry-topology/trivial-persistence.pdf
[18] Widdowson, D., Kurlin, V.: Pointwise distance distributions of periodic sets. arXiv:2108.04798 (version 1) (2021)
[19] Widdowson, D., Kurlin, V.: Resolving the data ambiguity for periodic crystals. Advances in Neural Information Processing Systems 35 (arXiv:2108.04798, v2) (2022), http://kurlin.org/projects/periodic+geometry/NeurIPS2022PDD.pdf
[20] Widdowson, D., et al.: Average minimum distances of periodic point sets. MATCH Comm. Math. Comp. Chemistry 87, 529–559 (2022)
MNE4T4oBgHgl3EQfiw29/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

MdE1T4oBgHgl3EQftQUl/content/2301.03374v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc0c51773605073b29f17ccb50987dfb951d1560bcc4e9776b3c30a6806a4a5c
size 8692727

OtAyT4oBgHgl3EQfUfen/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac37ec030752deb69937cdae794be524b673730286f54de13f8e5fb734a9a796
size 3276845

OtAyT4oBgHgl3EQfUfen/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e80552674d606850c456ed200b9a6f3e9357bda630ffa7365fa2057a43939d1
size 120390

PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de288ecf10fb888fdb10618959b96999da403d8696e47ec0725b2fb388a38307
size 3971536

PdE0T4oBgHgl3EQfkAEs/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4266c243207217c1333dc9bfa1ad90865ee43030a73d7c4ca37d66294e855026
size 139122

PtAyT4oBgHgl3EQf7fqt/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05f9fdfb61d46533f74d817bdd89532d48cc80026270358847cd434095918ccb
size 927998

QNFJT4oBgHgl3EQf2i1I/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc34add24b7d56a8a21acae64aa0eea37d346480b1f6e1f3c457fe8a148843ef
size 188787

QNFRT4oBgHgl3EQfJje8/content/tmp_files/2301.13496v1.pdf.txt ADDED
@@ -0,0 +1,1447 @@
| 1 |
+
arXiv:2301.13496v1 [math.AP] 31 Jan 2023
|
| 2 |
+
Conditional regularity for the Navier–Stokes–Fourier system
|
| 3 |
+
with Dirichlet boundary conditions
|
| 4 |
+
Danica Basari´c ∗
|
| 5 |
+
Eduard Feireisl ∗
|
| 6 |
+
Hana Mizerov´a ∗,†
|
| 7 |
+
∗ Institute of Mathematics of the Czech Academy of Sciences
|
| 8 |
+
ˇZitn´a 25, CZ-115 67 Praha 1, Czech Republic
|
| 9 |
+
† Department of Mathematical Analysis and Numerical Mathematics, Comenius University
|
| 10 |
+
Mlynsk´a dolina, 842 48 Bratislava, Slovakia
|
| 11 |
+
Abstract
|
| 12 |
+
We consider the Navier–Stokes–Fourier system with the inhomogeneous boundary condi-
|
| 13 |
+
tions for the velocity and the temperature. We show that solutions emanating from sufficiently
|
| 14 |
+
regular data remain regular as long as the density ̺, the absolute temperature ϑ, and the
|
| 15 |
+
modulus of the fluid velocity |u| remain bounded.
|
| 16 |
+
Keywords: Navier–Stokes–Fourier system, conditional regularity, blow–up criterion, regular
|
| 17 |
+
solution
|
| 18 |
+
1
|
| 19 |
+
Introduction
|
| 20 |
+
Standard systems of equations in fluid mechanics including the Navier–Stokes–Fourier system
|
| 21 |
+
governing the motion of a compressible, viscous, and heat conducting fluid are well posed in the
|
| 22 |
+
class of strong solutions on a possibly short time interval [0, Tmax). The recent results of Merle at
|
| 23 |
+
al. [16], [17] strongly indicate that Tmax may be finite, at least in the idealized case of “isentropic”
|
| 24 |
+
viscous flow. Conditional regularity results guarantee that a blow up will not occur as soon as
|
| 25 |
+
some lower order norms of solutions are controlled.
|
| 26 |
+
We consider the Navier–Stokes–Fourier system governing the time evolution of the mass density
|
| 27 |
+
̺ = ̺(t, x), the (absolute) temperature ϑ = ϑ(t, x), and the velocity u = u(t, x) of a compressible,
|
| 28 |
+
viscous, and heat conducting fluid:
|
| 29 |
+
∗The work of D.B., E.F., and H.M. was supported by the Czech Sciences Foundation (GAˇCR), Grant Agreement
|
| 30 |
+
21–02411S. The Institute of Mathematics of the Czech Academy of Sciences is supported by RVO:67985840.
|
| 31 |
+
1
|
| 32 |
+
|
| 33 |
+
∂t̺ + divx(̺u) = 0,  (1.1)

∂t(̺u) + divx(̺u ⊗ u) + ∇xp(̺, ϑ) = divxS(Dxu) + ̺f,  Dxu = (1/2)(∇xu + ∇ᵗxu),  (1.2)

∂t(̺e(̺, ϑ)) + divx(̺e(̺, ϑ)u) + divxq(∇xϑ) = S(Dxu) : Dxu − p(̺, ϑ)divxu.  (1.3)

The fluid is Newtonian; the viscous stress S is given by Newton's rheological law

S(Dxu) = 2µ( Dxu − (1/3)divxu I ) + η divxu I,  µ > 0, η ≥ 0.  (1.4)

The heat flux obeys Fourier's law

q(∇xϑ) = −κ∇xϑ,  κ > 0.  (1.5)

The equation of state for the pressure p and the internal energy e is given by the standard Boyle–Mariotte law of perfect gas,

p(̺, ϑ) = ̺ϑ,  e(̺, ϑ) = cvϑ,  cv > 0.  (1.6)

For the sake of simplicity, we suppose that the viscosity coefficients µ, η, the heat conductivity coefficient κ, as well as the specific heat at constant volume cv are constant.
There is a large number of recent results concerning conditional regularity for the Navier–Stokes–Fourier system in terms of various norms. Fan, Jiang, and Ou [4] consider a bounded fluid domain Ω ⊂ R3 with the conservative boundary conditions

u|∂Ω = 0,  ∇xϑ · n|∂Ω = 0.  (1.7)

The same problem is studied by Sun, Wang, and Zhang [19] and later by Huang, Li, and Wang [14]. There are results for the Cauchy problem Ω = R3 by Huang and Li [13], and by Jiu, Wang, and Ye [15]. Possibly the best result so far has been established in [11], where the blow up criterion for both the Cauchy problem and the boundary value problem (1.7) is formulated in terms of the maximum of the density and a Serrin type regularity condition for the temperature:

lim sup_{t→Tmax−} ( ∥̺(t, ·)∥L∞ + ∥ϑ − ϑ∞∥Ls(0,t)(Lr) ) = ∞,  3/2 < r ≤ ∞,  1 ≤ s ≤ ∞,  2/s + 3/r ≤ 2,

where ϑ∞ denotes the far field temperature in the Cauchy problem; cf. also the previous results by Wen and Zhu [23], [24].

Much less is known in the case of the Dirichlet boundary conditions

u|∂Ω = uB,  ϑ|∂Ω = ϑB.  (1.8)
Fang, Zi, and Zhang [5] showed that a strong solution of the Navier–Stokes–Fourier system remains regular up to a time T > 0 if (i) Ω ⊂ R2 is a bounded domain, (ii) uB = 0, ϑB = 0, and (iii)

lim sup_{t→T−} ( ∥̺∥L∞ + ∥ϑ∥L∞ ) < ∞.  (1.9)

All results mentioned above describe fluids in a conservative regime, meaning solutions are close to equilibrium in the long run. However, many real world applications concern fluids out of equilibrium, driven by possibly large driving forces f and/or inhomogeneous boundary conditions. The iconic examples are the Rayleigh–Bénard and Taylor–Couette flows, where the fluid is driven to a turbulent regime by a large temperature gradient and a large boundary velocity, respectively; see Davidson [3].

Motivated by these physically relevant examples, we consider a fluid confined to a bounded domain Ω ⊂ R3 with impermeable boundary, where the temperature and the (tangential) velocity are given on ∂Ω:

ϑ|∂Ω = ϑB,  ϑB = ϑB(x),  ϑB > 0 on ∂Ω,  (1.10)

u|∂Ω = uB,  uB = uB(x),  uB · n = 0 on ∂Ω.  (1.11)

The initial state of the fluid is prescribed:

̺(0, ·) = ̺0, ̺0 > 0 in Ω,  ϑ(0, ·) = ϑ0, ϑ0 > 0 in Ω,  u(0, ·) = u0.  (1.12)

The initial and boundary data are supposed to satisfy suitable compatibility conditions specified below.

The existence of local in time strong solutions for the problem (1.1)–(1.6), endowed with the inhomogeneous boundary conditions (1.10), (1.11), was established by Valli [20], [21]; see also Valli and Zajaczkowski [22]. The solution exists on a maximal time interval [0, Tmax), Tmax > 0. Our goal is to show that if Tmax < ∞, then necessarily

lim sup_{t→Tmax−} ( ∥̺(t, ·)∥L∞(Ω) + ∥ϑ(t, ·)∥L∞(Ω) + ∥u(t, ·)∥L∞(Ω;R3) ) = ∞.  (1.13)

The proof is based on deriving suitable a priori bounds assuming boundedness of all the norms involved in (1.13), as well as of the norm of the initial/boundary data in a suitable function space. Although the approach shares some similarity with that of Fang, Zi, and Zhang [5], essential modifications must be made to accommodate the inhomogeneous boundary data as well as the driving force f. The importance of conditional regularity results in the numerical analysis of flows with uncertain initial data was discussed recently in [7].
The paper is organized as follows. In Section 2, we introduce the class of strong solutions to the Navier–Stokes–Fourier system and state our main result concerning conditional regularity. The remaining part of the paper is devoted to the proof of the main result, namely deriving suitable a priori bounds. In Section 3, we recall the standard energy estimates that hold even in the class of weak solutions. Section 4 is the heart of the paper: we establish the necessary estimates on the velocity gradient by means of the celebrated Gagliardo–Nirenberg interpolation inequality. In Section 5, higher order estimates on the velocity gradient are derived, and, finally, the estimates are closed by proving bounds on the time derivative of the temperature in Section 6. This last part borrows the main ideas from [9].

2 Strong solutions, main result

We start the analysis by recalling the concept of strong solution introduced by Valli [21]. Similarly to the boundary data uB, ϑB, we suppose that the driving force f = f(x) is independent of time, meaning we deal with an autonomous problem. Following [21], we suppose that Ω ⊂ R3 is a bounded domain with ∂Ω of class C4.
We assume the data belong to the following class:

̺0 ∈ W3,2(Ω),  0 < ̺_0 ≤ min_{x∈Ω} ̺0(x),
ϑ0 ∈ W3,2(Ω),  0 < ϑ_0 ≤ min_{x∈Ω} ϑ0(x),
u0 ∈ W3,2(Ω; R3),
ϑB ∈ W7/2(∂Ω),  0 < ϑ_B ≤ min_{x∈∂Ω} ϑB(x),
uB ∈ W7/2(∂Ω; R3),  uB · n = 0,
f ∈ W2,2(Ω; R3).  (2.1)

In addition, the data must satisfy the compatibility conditions

ϑ0 = ϑB,  u0 = uB on ∂Ω,
̺0u0 · ∇xu0 + ∇xp(̺0, ϑ0) = divxS(Dxu0) + ̺0f on ∂Ω,
̺0u0 · ∇xϑ0 + divxq(∇xϑ0) = S(Dxu0) : Dxu0 − p(̺0, ϑ0)divxu0 on ∂Ω.  (2.2)

We set

D0 = max{ ∥(̺0, ϑ0, u0)∥W3,2(Ω;R5), 1/̺_0, 1/ϑ_0, 1/ϑ_B, ∥ϑB∥W7/2(∂Ω), ∥uB∥W7/2(∂Ω;R3), ∥f∥W2,2(Ω;R3) }.  (2.3)
2.1 Local existence

The following result was proved by Valli [21, Theorem A] (see also [20]).

Theorem 2.1 (Local existence of strong solutions). Let Ω ⊂ R3 be a bounded domain of class C4. Suppose that the data (̺0, ϑ0, u0), (ϑB, uB), and f belong to the class (2.1) and satisfy the compatibility conditions (2.2).

Then there exists a maximal time Tmax > 0 such that the Navier–Stokes–Fourier system (1.1)–(1.6), with the boundary conditions (1.10), (1.11), and the initial conditions (1.12), admits a solution (̺, ϑ, u) in [0, Tmax) × Ω, unique in the class

̺, ϑ ∈ C([0, T]; W3,2(Ω)),  u ∈ C([0, T]; W3,2(Ω; R3)),
ϑ ∈ L2(0, T; W4,2(Ω)),  u ∈ L2(0, T; W4,2(Ω; R3))  (2.4)

for any 0 < T < Tmax. The existence time Tmax is bounded below by a quantity c(D0) depending solely on the norms of the data specified in (2.3). In particular,

lim_{τ→Tmax−} ∥(̺, ϑ, u)(τ, ·)∥W3,2(Ω;R5) = ∞.  (2.5)
2.2 Blow up criterion, conditional regularity

Our goal is to show the following result.

Theorem 2.2 (Blow up criterion). Under the hypotheses of Theorem 2.1, suppose that the maximal existence time Tmax < ∞ is finite. Then

lim sup_{τ→Tmax−} ∥(̺, ϑ, u)(τ, ·)∥L∞(Ω;R5) = ∞.  (2.6)

Theorem 2.2 is in the spirit of the blow up criteria for general parabolic systems: the solution remains regular as long as it is bounded. Of course, the problem in question is of mixed hyperbolic–parabolic type.

The proof of Theorem 2.2 follows from suitable a priori bounds applied on a compact time interval.
Proposition 2.3 (Conditional regularity). Under the hypotheses of Theorem 2.1, let (̺, ϑ, u) be the strong solution of the Navier–Stokes–Fourier system belonging to the class (2.4) and satisfying

sup_{(τ,x)∈[0,T)×Ω} ̺(τ, x) ≤ ̺̄,  sup_{(τ,x)∈[0,T)×Ω} ϑ(τ, x) ≤ ϑ̄,  sup_{(τ,x)∈[0,T)×Ω} |u(τ, x)| ≤ ū  (2.7)

for some T < Tmax.

Then there is a quantity c(T, D0, ̺̄, ϑ̄, ū), bounded for bounded arguments, such that

sup_{τ∈[0,T)} max{ ∥(̺, ϑ, u)(τ, ·)∥W3,2(Ω;R5); sup_{x∈Ω} 1/̺(τ, x); sup_{x∈Ω} 1/ϑ(τ, x) } ≤ c(T, D0, ̺̄, ϑ̄, ū).  (2.8)

In view of Theorem 2.1, the conclusion of Theorem 2.2 follows from Proposition 2.3. The rest of the paper is therefore devoted to the proof of Proposition 2.3.
Remark 2.4. As observed in [8], the conditional regularity result established in Proposition 2.3 gives rise to stability with respect to the data. More specifically, the maximal existence time Tmax is a lower semicontinuous function of the data with respect to the topologies in (2.1).

Remark 2.5. Conditional regularity results, in combination with the weak–strong uniqueness principle in the class of measure–valued solutions, are an efficient tool for proving convergence of numerical schemes; see [6, Chapter 11]. The concept of measure–valued solutions to the Navier–Stokes–Fourier system with inhomogeneous Dirichlet boundary conditions has been introduced recently by Chaudhuri [1].
3 Energy estimates

To begin, it is suitable to extend the boundary data into Ω. For definiteness, we consider the (unique) solutions of the Dirichlet problems

∆xϑ̃ = 0 in Ω,  ϑ̃|∂Ω = ϑB,
divxS(Dxũ) = 0 in Ω,  ũ|∂Ω = uB.  (3.1)

By abuse of notation, we use the same symbols ϑB, uB for both the boundary values and their C1 extensions ϑ̃ = ϑ̃(x), ũ = ũ(x) inside Ω.
We start with the ballistic energy equality, see [2, Section 2.4]:

d/dt ∫Ω ( (1/2)̺|u − uB|² + ̺e − ϑB̺s ) dx + ∫Ω (ϑB/ϑ)( S(Dxu) : Dxu + κ|∇xϑ|²/ϑ ) dx
= −∫Ω ( ̺u ⊗ u + pI − S(Dxu) ) : DxuB dx + (1/2)∫Ω ̺u · ∇x|uB|² dx
+ ∫Ω ̺(u − uB) · f dx − ∫Ω ̺s u · ∇xϑB dx + κ∫Ω (∇xϑ/ϑ) · ∇xϑB dx,  (3.2)

where we have introduced the entropy

s = cv log(ϑ) − log(̺).
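This form of s is dictated by Gibbs' relation ϑ Ds = De + p D(1/̺) for the constitutive law (1.6); a quick verification, added here for the reader's convenience (not part of the original text):

```latex
% Gibbs' relation for p = ϱϑ, e = c_v ϑ, s = c_v log ϑ − log ϱ:
\vartheta\,\mathrm{d}s
  = \vartheta\Big(c_v\,\frac{\mathrm{d}\vartheta}{\vartheta}
      - \frac{\mathrm{d}\varrho}{\varrho}\Big)
  = c_v\,\mathrm{d}\vartheta - \vartheta\,\frac{\mathrm{d}\varrho}{\varrho},
\qquad
\mathrm{d}e + p\,\mathrm{d}\Big(\frac{1}{\varrho}\Big)
  = c_v\,\mathrm{d}\vartheta - \varrho\vartheta\,\frac{\mathrm{d}\varrho}{\varrho^{2}}
  = c_v\,\mathrm{d}\vartheta - \vartheta\,\frac{\mathrm{d}\varrho}{\varrho}.
```

The two sides agree, so s is indeed the (specific) entropy associated with (1.6).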
Thus the choice (3.1) yields the following bounds:

sup_{t∈[0,T)} ∫Ω ̺| log(ϑ)|(t, ·) dx ≤ c(T, D0, ̺̄, ϑ̄, ū),  (3.3)

∫0T∫Ω |∇xu|² dx dt ≤ C(̺̄, ϑ̄, ū; data)  ⇒  ∫0T ∥u∥²W1,2(Ω;R3) dt ≤ c(T, D0, ̺̄, ϑ̄, ū),  (3.4)

∫0T∫Ω ( |∇xϑ|² + |∇x log(ϑ)|² ) dx dt ≤ c(T, D0, ̺̄, ϑ̄, ū)
⇒  ∫0T ∥ϑ∥²W1,2(Ω) dt + ∫0T ∥ log(ϑ)∥²W1,2(Ω) dt ≤ c(T, D0, ̺̄, ϑ̄, ū).  (3.5)
4 Estimates of the velocity gradient

This section is the heart of the paper. In principle, we follow arguments similar to those of Fang, Zi, and Zhang [5, Section 3], here adapted to the inhomogeneous boundary conditions.

4.1 Estimates of the velocity material derivative

Let us introduce the material derivative of a function g,

Dtg = ∂tg + u · ∇xg.

Accordingly, we may rewrite the momentum equation (1.2) as

̺Dtu + ∇xp = divxS + ̺f.  (4.1)
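The rewriting (4.1) is the standard consequence of the continuity equation (1.1); a one-line verification (added for completeness):

```latex
% Expand the time derivative in (1.2) and use (1.1):
\partial_t(\varrho\mathbf u) + \mathrm{div}_x(\varrho\mathbf u\otimes\mathbf u)
 = \varrho\,\partial_t\mathbf u
   + \mathbf u\,\underbrace{\big(\partial_t\varrho
       + \mathrm{div}_x(\varrho\mathbf u)\big)}_{=\,0}
   + \varrho\,\mathbf u\cdot\nabla_x\mathbf u
 = \varrho\, D_t\mathbf u .
```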
Now, consider the scalar product of the momentum equation (4.1) with Dt(u − uB):

̺|Dtu|² + ∇xp · Dt(u − uB) = divxS(Dxu) · Dt(u − uB) + ̺f · Dt(u − uB) + ̺Dtu · DtuB.  (4.2)

The next step is integrating (4.2) over Ω. Here and hereafter we use the hypothesis uB · n|∂Ω = 0, yielding

Dt(u − uB)|∂Ω = (∂tu + u · ∇x(u − uB))|∂Ω = uB · ∇x(u − uB)|∂Ω = 0.  (4.3)

Writing

divxS(Dxu) = µ∆xu + (η + µ/3)∇xdivxu

and making use of (4.3), we obtain

∫Ω divxS(Dxu) · Dt(u − uB) dx
= −∫Ω S(Dxu) : ∇x∂tu dx − µ∫Ω ∇xu : ∇x(u · ∇x(u − uB)) dx − (η + µ/3)∫Ω divxu divx(u · ∇x(u − uB)) dx
= −(1/2) d/dt ∫Ω S(Dxu) : Dxu dx − µ∫Ω ∇xu : ∇x(u · ∇x(u − uB)) dx − (η + µ/3)∫Ω divxu divx(u · ∇x(u − uB)) dx,  (4.4)
where, furthermore,

∫Ω ∇xu : ∇x(u · ∇xu) dx = ∫Ω ∇xu : (∇xu · ∇xu) dx + (1/2)∫Ω u · ∇x|∇xu|² dx
= ∫Ω ∇xu : (∇xu · ∇xu) dx − (1/2)∫Ω divxu |∇xu|² dx.  (4.5)

Note carefully that we have used u · n|∂Ω = 0 in the last integration by parts. Similarly,

∫Ω divxu divx(u · ∇xu) dx = ∫Ω divxu ∇xu : ∇ᵗxu dx − (1/2)∫Ω (divxu)³ dx.  (4.6)
Thus, summing up the previous observations, we get

(1/2) d/dt ∫Ω S(Dxu) : Dxu dx + (1/2)∫Ω ̺|Dtu|² dx + ∫Ω ∇xp · Dt(u − uB) dx ≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫Ω |∇xu|³ dx ).  (4.7)
Moreover,

∫Ω ∇xp · Dt(u − uB) dx = −∫Ω p divx(Dt(u − uB)) dx = −∫Ω p divxDtu dx + ∫Ω p divx(u · ∇xuB) dx,  (4.8)

where

p divxDtu = ∂t(p divxu) − (∂tp + divx(pu)) divxu + divx(pu) divxu + p divx(u · ∇xu)
= ∂t(p divxu) − (∂tp + divx(pu)) divxu + p∇xu : ∇ᵗxu + divx(pu divxu).

As u · n|∂Ω = 0, we have

∫Ω divx(pu divxu) dx = 0,

and the above estimates together with (4.7) give rise to

(1/2) d/dt ∫Ω S(Dxu) : Dxu dx − d/dt ∫Ω p divxu dx + (1/2)∫Ω ̺|Dtu|² dx
≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫Ω |∇xu|³ dx ) − ∫Ω (∂tp + divx(pu)) divxu dx.
Finally, observing that

∂tp + divx(pu) = ̺Dtϑ,

we conclude

(1/2) d/dt ∫Ω S(Dxu) : Dxu dx − d/dt ∫Ω p divxu dx + (1/2)∫Ω ̺|Dtu|² dx
≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫Ω ̺|Dtϑ||∇xu| dx + ∫Ω |∇xu|³ dx ).  (4.9)
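The identity used in the last step is a direct consequence of the equation of state (1.6) and the continuity equation (1.1); for the reader's convenience:

```latex
% p = ϱϑ together with (1.1):
\partial_t p + \mathrm{div}_x(p\mathbf u)
 = \partial_t(\varrho\vartheta) + \mathrm{div}_x(\varrho\vartheta\,\mathbf u)
 = \vartheta\,\underbrace{\big(\partial_t\varrho
     + \mathrm{div}_x(\varrho\mathbf u)\big)}_{=\,0}
   + \varrho\big(\partial_t\vartheta + \mathbf u\cdot\nabla_x\vartheta\big)
 = \varrho\, D_t\vartheta .
```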
4.2 Higher order velocity material derivative estimates

Following [5, Section 3, Lemma 3.3], see also Hoff [12], we deduce

̺D²tu + ∇x∂tp + divx(∇xp ⊗ u)
= µ( ∆x∂tu + divx(∆xu ⊗ u) ) + (η + µ/3)( ∇xdivx∂tu + divx((∇xdivxu) ⊗ u) ) + ̺u · ∇xf.  (4.10)

Next, we compute

DtuB = u · ∇xuB,
D²tuB = ∂tu · ∇xuB + u · ∇x(u · ∇xuB)
= Dtu · ∇xuB − (u · ∇xu) · ∇xuB + u · ∇x(u · ∇xuB)
= Dtu · ∇xuB + (u ⊗ u) : ∇²xuB.  (4.11)

Consequently, we may rewrite (4.10) in the form

̺D²t(u − uB) + ∇x∂tp + divx(∇xp ⊗ u)
= µ( ∆x∂tu + divx(∆xu ⊗ u) ) + (η + µ/3)( ∇xdivx∂tu + divx((∇xdivxu) ⊗ u) ) + ̺u · ∇xf
− ̺Dtu · ∇xuB − ̺(u ⊗ u) : ∇²xuB.  (4.12)
The next step is considering the scalar product of (4.12) with Dt(u − uB) and integrating over Ω. The resulting integrals can be handled as follows:

̺D²t(u − uB) · Dt(u − uB) = (1/2)̺ Dt|Dt(u − uB)|²
= (1/2)̺( ∂t|Dt(u − uB)|² + u · ∇x|Dt(u − uB)|² )
= (1/2)∂t( ̺|Dt(u − uB)|² ) + (1/2)divx( ̺u|Dt(u − uB)|² ),

where we have used the equation of continuity (1.1). Seeing that u · n|∂Ω = 0, we get

∫Ω ̺D²t(u − uB) · Dt(u − uB) dx = (1/2) d/dt ∫Ω ̺|Dt(u − uB)|² dx.  (4.13)
Similarly,

∫Ω ( ∇x∂tp + divx(∇xp ⊗ u) ) · Dt(u − uB) dx
= −∫Ω ( ∂tp + divx(pu) ) divxDt(u − uB) dx
+ ∫Ω ( divx(pu) divxDt(u − uB) − ∇xp ⊗ u : ∇xDt(u − uB) ) dx,  (4.14)

where

∫Ω ∇xp ⊗ u : ∇xDt(u − uB) dx = −∫Ω p∇xu : ∇xDt(u − uB) dx + ∫Ω ∇x(pu) : ∇xDt(u − uB) dx.

In addition, as Dt(u − uB) vanishes on ∂Ω, we can integrate by parts in the last integral, obtaining

∫Ω ∇x(pu) : ∇xDt(u − uB) dx = ∫Ω divx(pu) divxDt(u − uB) dx.

Thus, similarly to the preceding section, we conclude

∫Ω ( ∇x∂tp + divx(∇xp ⊗ u) ) · Dt(u − uB) dx = −∫Ω ̺Dtϑ divxDt(u − uB) dx + ∫Ω p∇xu : ∇xDt(u − uB) dx.  (4.15)
Analogously,

∫Ω ( ∆x∂tu + divx(∆xu ⊗ u) ) · Dt(u − uB) dx
= −∫Ω ∇x∂tu : ∇xDt(u − uB) dx − ∫Ω (∆xu ⊗ u) : ∇xDt(u − uB) dx
= −∫Ω ∇xDtu : ∇xDt(u − uB) dx − ∫Ω ( ∆xu ⊗ u − ∇x(u · ∇xu) ) : ∇xDt(u − uB) dx,  (4.16)
where, using the summation convention,

∫Ω ( ∆xu ⊗ u ) : ∇xDt(u − uB) dx
= ∫Ω ∂xk( uj ∂xkui ) ∂xjDt(u − uB)i dx − ∫Ω ∂xkui ∂xkuj ∂xjDt(u − uB)i dx
= ∫Ω ∂xj( uj ∂xkui ) ∂xkDt(u − uB)i dx − ∫Ω ∂xkui ∂xkuj ∂xjDt(u − uB)i dx
= ∫Ω divxu ∇xu : ∇xDt(u − uB) dx + ∫Ω ( uj ∂xk∂xjui ) ∂xkDt(u − uB)i dx − ∫Ω ∂xkui ∂xkuj ∂xjDt(u − uB)i dx
= ∫Ω ∇x(u · ∇xu) : ∇xDt(u − uB) dx + ∫Ω divxu ∇xu : ∇xDt(u − uB) dx
− ∫Ω ∂xjui ∂xkuj ∂xkDt(u − uB)i dx − ∫Ω ∂xkui ∂xkuj ∂xjDt(u − uB)i dx.  (4.17)
Summing up (4.16), (4.17), we conclude

∫Ω ( ∆x∂tu + divx(∆xu ⊗ u) ) · Dt(u − uB) dx
= −∫Ω ∇xDtu : ∇xDt(u − uB) dx − ∫Ω divxu ∇xu : ∇xDt(u − uB) dx
+ ∫Ω ∂xjui ∂xkuj ∂xkDt(u − uB)i dx + ∫Ω ∂xkui ∂xkuj ∂xjDt(u − uB)i dx.  (4.18)
Estimating the remaining integrals in (4.12) in a similar manner, we may infer

(1/2) d/dt ∫Ω ̺|Dt(u − uB)|² dx + µ∫Ω |∇xDt(u − uB)|² dx + (η + µ/3)∫Ω |divxDt(u − uB)|² dx
≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫Ω ̺|Dtϑ|² dx + ∫Ω |∇xu|⁴ dx + ∫Ω ̺|Dtu|² dx ),  (4.19)

cf. [5, Section 3, Lemma 3.3].
4.3 Velocity decomposition

Following the original idea of Sun, Wang, and Zhang [18], we decompose the velocity field in the form

u = v + w,  (4.20)

divxS(Dxv) = ∇xp in (0, T) × Ω,  v|∂Ω = 0,  (4.21)

divxS(Dxw) = ̺Dtu − ̺f in (0, T) × Ω,  w|∂Ω = uB.  (4.22)
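As a consistency check (added here, not part of the original argument), the linearity of S in the symmetric gradient shows that (4.21) and (4.22) indeed sum to the momentum balance in the form (4.1), with the correct boundary condition:

```latex
% D_x u = D_x v + D_x w, and S is linear, hence
\mathrm{div}_x S(D_x\mathbf u)
 = \mathrm{div}_x S(D_x\mathbf v) + \mathrm{div}_x S(D_x\mathbf w)
 = \nabla_x p + \varrho D_t\mathbf u - \varrho\mathbf f,
\qquad
(\mathbf v + \mathbf w)\big|_{\partial\Omega} = \mathbf u_B
 = \mathbf u\big|_{\partial\Omega},
```

which is exactly the rewritten momentum equation (4.1).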
Since

divxS(Dx∂tv) = ∇x∂tp in (0, T) × Ω,  ∂tv|∂Ω = 0,

we get

∫Ω ∂tp divxv dx = −∫Ω ∇x∂tp · v dx = (1/2) d/dt ∫Ω S(Dxv) : Dxv dx.  (4.23)

Moreover, the standard elliptic estimates for the Lamé operator yield

∥v∥W1,q(Ω;R3) ≤ c(q, ̺̄, ϑ̄) for all 1 ≤ q < ∞,  (4.24)

∥v∥W2,q(Ω;R3) ≤ c(q, ̺̄, ϑ̄)( ∥∇x̺∥Lq(Ω;R3) + ∥∇xϑ∥Lq(Ω;R3) ),  1 < q < ∞.  (4.25)

Similarly,

∥w∥W2,2(Ω;R3) ≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∥√̺∂tu∥L2(Ω;R3) + ∥∇xu∥L2(Ω;R3×3) ).  (4.26)

The estimates (4.24)–(4.26) are uniform in the time interval [0, T).
4.4 Temperature estimates

Similarly to Fang, Zi, and Zhang [5, Section 3, Lemma 3.4], we multiply the internal energy equation (1.3) by ∂tϑ and integrate over Ω, obtaining

cv∫Ω ̺|Dtϑ|² dx + (κ/2) d/dt ∫Ω |∇xϑ|² dx
= cv∫Ω ̺Dtϑ u · ∇xϑ dx − ∫Ω ̺ϑ divxu Dtϑ dx + ∫Ω ̺ϑ divxu u · ∇xϑ dx
+ d/dt ∫Ω ϑ S(Dxu) : ∇xu dx
− µ∫Ω ϑ( ∇xu + ∇ᵗxu − (2/3)divxu I ) : ( ∇x∂tu + ∇ᵗx∂tu − (2/3)divx∂tu I ) dx
− 2η∫Ω ϑ divxu divx∂tu dx.  (4.27)
Indeed, the term involving the boundary integral is handled as

−κ∫Ω ∆xϑ ∂tϑ dx = −κ∫∂Ω ∂tϑB ∇xϑ · n dSx + (κ/2) d/dt ∫Ω |∇xϑ|² dx,

where

∫∂Ω ∂tϑB ∇xϑ · n dSx = 0,

as the boundary temperature is independent of t.
Similarly to Fang, Zi, and Zhang [5, Section 3, Lemma 3.4], we have to show that the integrals

∫Ω ϑ ∇xu : ∇x∂tu dx,  ∫Ω ϑ ∇xu : ∇ᵗx∂tu dx,  and  ∫Ω ϑ divxu divx∂tu dx

can be rewritten in a form compatible with (4.19), meaning with the time derivatives replaced by material derivatives. Fortunately, this step can be carried out in the present setting using only the boundary condition u · n|∂Ω = 0. Indeed, we get

∫Ω ϑ ∇xu : ∇x∂tu dx = ∫Ω ϑ ∇xu : ∇x(Dtu) dx − ∫Ω ϑ ∇xu : ∇x(u · ∇xu) dx,

where

∫Ω ϑ ∇xu : ∇x(u · ∇xu) dx = ∫Ω ϑ ∇xu : (∇xu · ∇xu) dx + (1/2)∫Ω ϑ u · ∇x|∇xu|² dx
= ∫Ω ϑ ∇xu : (∇xu · ∇xu) dx − (1/2)∫Ω |∇xu|² ∇xϑ · u dx − (1/2)∫Ω |∇xu|² ϑ divxu dx.
Similarly,

∫Ω ϑ ∇xu : ∇ᵗx∂tu dx = ∫Ω ϑ ∇xu : ∇ᵗx(Dtu) dx − ∫Ω ϑ ∇xu : ∇ᵗx(u · ∇xu) dx,

where

∫Ω ϑ ∇xu : ∇ᵗx(u · ∇xu) dx = ∫Ω ϑ ∇xu : (∇ᵗxu · ∇ᵗxu) dx + (1/2)∫Ω ϑ u · ∇x(∇xu : ∇ᵗxu) dx
= ∫Ω ϑ ∇xu : (∇ᵗxu · ∇ᵗxu) dx − (1/2)∫Ω (∇xu : ∇ᵗxu) ∇xϑ · u dx − (1/2)∫Ω (∇xu : ∇ᵗxu) ϑ divxu dx.
Finally,

∫Ω ϑ divxu divx∂tu dx = ∫Ω ϑ divxu divxDtu dx − ∫Ω ϑ divxu divx(u · ∇xu) dx,

where

∫Ω ϑ divxu divx(u · ∇xu) dx = ∫Ω ϑ divxu (∇xu : ∇ᵗxu) dx + (1/2)∫Ω ϑ u · ∇x|divxu|² dx
= ∫Ω ϑ divxu (∇xu : ∇ᵗxu) dx − (1/2)∫Ω |divxu|² ∇xϑ · u dx − (1/2)∫Ω |divxu|² ϑ divxu dx.
We conclude, using (4.7), (4.19), and (4.27),

∫Ω |∇xϑ|²(τ, ·) dx + ∫0τ∫Ω ̺|Dtϑ|² dx dt ≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫0τ∫Ω |∇xu|⁴ dx dt ).  (4.28)
Next, by virtue of the decomposition u = v + w and the bound (4.24),

∫Ω |∇xu|⁴ dx ≲ ∫Ω |∇xv|⁴ dx + ∫Ω |∇xw|⁴ dx ≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫Ω |∇xw|⁴ dx ),  (4.29)

and, similarly,

∥w∥L∞(Ω;R3) ≤ ∥u∥L∞(Ω;R3) + ∥v∥L∞(Ω;R3) ≤ c(T, D0, ̺̄, ϑ̄, ū).  (4.30)
Recalling the Gagliardo–Nirenberg interpolation inequality in the form

∥∇xU∥²L4(Ω;R3) ≤ ∥U∥L∞(Ω) ∥∆xU∥L2(Ω) whenever U|∂Ω = 0,  (4.31)

we may use (4.29), (4.30) to rewrite (4.28) in the form

∫Ω |∇xϑ|²(τ, ·) dx + ∫0τ∫Ω ̺|Dtϑ|² dx dt
≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫0τ∫Ω |∇xϑ|² dx dt + ∫0τ ∥w∥²W2,2(Ω;R3) dt ).  (4.32)
Finally, we use the elliptic estimates (4.26) to conclude

∫Ω |∇xϑ|²(τ, ·) dx + ∫0τ∫Ω ̺|Dtϑ|² dx dt
≤ c(T, D0, ̺̄, ϑ̄, ū)( 1 + ∫0τ∫Ω ( |∇xϑ|² + |∇xu|² ) dx dt + ∫0τ ∥√̺∂tu∥²L2(Ω;R3) dt ).  (4.33)
Summing up (4.7), (4.19), and (4.33) we may apply Gronwall's lemma to obtain the following bounds:

sup_{t∈[0,T)} ∥u(t, ·)∥_{W^{1,2}(Ω;R^3)} ≤ c(T, D0, ̺, ϑ, u),  (4.34)

sup_{t∈[0,T)} ∥√̺ Dtu(t, ·)∥_{L^2(Ω;R^3)} ≤ c(T, D0, ̺, ϑ, u),  (4.35)

sup_{t∈[0,T)} ∥ϑ(t, ·)∥_{W^{1,2}(Ω)} ≤ c(T, D0, ̺, ϑ, u),  (4.36)

∫_0^T ∫_Ω |∇xDtu|^2 dx dt ≤ c(T, D0, ̺, ϑ, u),  (4.37)

∫_0^T ∫_Ω ̺|Dtϑ|^2 dx dt ≤ c(T, D0, ̺, ϑ, u).  (4.38)
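The Gronwall step is standard; schematically (the symbols y, c₁, c₂ below are shorthand introduced here for exposition, not notation from the paper), the summed inequalities take the form

```latex
% Let y(\tau) denote the sum of the quantities controlled on the left-hand
% sides of (4.7), (4.19), (4.33). The summed inequality reads
y(\tau) \le c_1 + c_2 \int_0^\tau y(t)\,\mathrm{d}t,
  \qquad 0 \le \tau < T,
% and Gronwall's lemma yields the uniform bound
y(\tau) \le c_1\, e^{c_2 \tau} \le c_1\, e^{c_2 T},
% from which the estimates (4.34)--(4.38) follow.
```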
Moreover, it follows from (4.24), (4.31), and (4.35) that

sup_{t∈[0,T)} ∥∇xu(t, ·)∥_{L^4(Ω;R^{3×3})} ≤ c(T, D0, ̺, ϑ, u).  (4.39)
In addition, (4.38), (4.39) and the standard parabolic estimates applied to the internal energy balance (1.3) yield

∫_0^T ∥ϑ∥^2_{W^{2,2}(Ω)} dt ≤ c(T, D0, ̺, ϑ, u).  (4.40)
5  Second energy bound
It follows from (4.26), (4.35) that

sup_{t∈[0,T)} ∥w(t, ·)∥_{W^{2,2}(Ω;R^3)} ≤ c(T, D0, ̺, ϑ, u);  (5.1)
whence, by virtue of (4.24) and the Sobolev embedding W^{1,2}(Ω) ↪ L^6(Ω),

sup_{t∈[0,T)} ∥∇xu(t, ·)∥^2_{L^6(Ω;R^{3×3})} ≤ c(T, D0, ̺, ϑ, u).  (5.2)
Moreover, as a consequence of (4.37), Dtu is bounded in L^2(L^6), which, combined with (5.2), gives rise to

∫_0^T ∥∂tu∥^2_{L^6(Ω;R^3)} dt ≤ c(T, D0, ̺, ϑ, u).  (5.3)
Finally, going back to (4.22) we conclude

∫_0^T ∥w∥^2_{W^{2,6}(Ω;R^3)} dt ≤ c(T, D0, ̺, ϑ, u),  (5.4)

and

∫_0^T ∥u∥^2_{W^{1,q}(Ω;R^3)} dt ≤ c(T, D0, ̺, ϑ, u, q) for any 1 ≤ q < ∞.  (5.5)
6  Estimates of the derivatives of the density
Using (5.4), (5.5), we may proceed as in [19, Section 5] to deduce the bounds

sup_{t∈[0,T)} ( ∥∂t̺(t, ·)∥_{L^6(Ω)} + ∥̺(t, ·)∥_{W^{1,6}(Ω)} ) ≤ c(T, D0, ̺, ϑ, u).  (6.1)
Revisiting the momentum equation (1.2), we use (6.1) together with the other bounds established above to obtain

∫_0^T ∥u∥^2_{W^{2,6}(Ω;R^3)} dt ≤ c(T, D0, ̺, ϑ, u).  (6.2)
6.1  Positivity of the density and temperature
It follows from (6.2) that divxu is bounded in L^1(0, T; L∞(Ω)). Thus the equation of continuity (1.1) yields a positive lower bound on the density,

inf_{(t,x)∈[0,T)×Ω} ̺(t, x) ≥ ̺ > 0,  (6.3)

where the lower bound depends on the data as well as on the length T of the time interval.
Similarly, rewriting the internal energy balance equation (1.3) in the form

cv (∂tϑ + u · ∇xϑ) − (κ/̺) ∆xϑ = (1/̺) S : Dxu − ϑ divxu,  (6.4)

we may apply the standard parabolic maximum/minimum principle to deduce
inf_{(t,x)∈[0,T)×Ω} ϑ(t, x) ≥ ϑ > 0.  (6.5)
7  Parabolic regularity for the heat equation
We rewrite the parabolic equation (6.4) in terms of Θ = ϑ − ϑB. Recalling ∆xϑB = 0, we get

cv (∂tΘ + u · ∇xϑ) − (κ/̺) ∆xΘ = (1/̺) S : Dxu − ϑ divxu  (7.1)

with the homogeneous Dirichlet boundary conditions

Θ|∂Ω = 0.  (7.2)
Now, we can apply all arguments of [10, Sections 4.6, 4.7] to Θ, obtaining the bounds

∥ϑ∥_{C^α([0,T]×Ω)} ≤ c(T, D0, ̺, ϑ, u) for some α > 0,  (7.3)

∥ϑ∥_{L^p(0,T;W^{2,3}(Ω))} + ∥∂tϑ∥_{L^p(0,T;L^3(Ω))} ≤ c(T, D0, ̺, ϑ, u) for all 1 ≤ p < ∞,  (7.4)

together with

∥u∥_{L^p(0,T;W^{2,6}(Ω;R^3))} + ∥∂tu∥_{L^p(0,T;L^6(Ω;R^3))} ≤ c(T, D0, ̺, ϑ, u) for any 1 ≤ p < ∞.  (7.5)
8  Final estimates
The bounds (7.5) imply, in particular,

sup_{(t,x)∈[0,T)×Ω} |∇xu(t, x)| ≤ c(T, D0, ̺, ϑ, u).  (8.1)
Thus the desired higher order estimates can be obtained exactly as in [9, Section 4.6]. Indeed, the arguments of [9, Section 4.6] are based on differentiating the equation (7.1) with respect to time, which gives rise to a parabolic problem for ∂tϑ with the homogeneous Dirichlet boundary conditions ∂tϑ|∂Ω = 0. Specifically, we get

cv ∂^2_{tt}ϑ + cv u · ∇x∂tϑ − (κ/̺) ∆x∂tϑ = −cv ∂tu · ∇xϑ − (1/̺^2) ∂t̺ (κ∆xϑ + S(Dxu) : Dxu) + (2/̺) S(Dxu) : Dx∂tu − ∂tϑ divxu − ϑ divx∂tu.

The estimates obtained in the previous sections imply that the right-hand side of the above equation is bounded in L^2(0, T; L^2(Ω)). Thus, multiplying the equation by ∆x∂tϑ and performing the standard integration by parts, we get the desired estimates as in [9, Section 4.6].

The remaining estimates are obtained exactly as in [9, Section 4.6]:
sup_{t∈[0,T)} ∥ϑ(t, ·)∥_{W^{3,2}(Ω)} + sup_{t∈[0,T)} ∥∂tϑ(t, ·)∥_{W^{1,2}(Ω)} ≤ c(T, D0, ̺, ϑ, u),  (8.2)

∫_0^T ( ∥∂tϑ∥^2_{W^{2,2}(Ω)} + ∥ϑ∥^2_{W^{4,2}(Ω)} ) dt ≤ c(T, D0, ̺, ϑ, u),  (8.3)
sup_{t∈[0,T)} ∥u(t, ·)∥_{W^{3,2}(Ω;R^3)} + sup_{t∈[0,T)} ∥∂tu(t, ·)∥_{W^{1,2}(Ω;R^3)} ≤ c(T, D0, ̺, ϑ, u),  (8.4)

∫_0^T ( ∥∂tu∥^2_{W^{2,2}(Ω;R^3)} + ∥u∥^2_{W^{4,2}(Ω;R^3)} ) dt ≤ c(T, D0, ̺, ϑ, u),  (8.5)

and
sup_{t∈[0,T)} ∥̺(t, ·)∥_{W^{3,2}(Ω)} ≤ c(T, D0, ̺, ϑ, u).  (8.6)

We have completed the proof of Proposition 2.3.
References

[1] N. Chaudhuri. On weak (measure valued)–strong uniqueness for Navier–Stokes–Fourier system with Dirichlet boundary condition. Archive Preprint Series, 2022. arXiv preprint No. 2207.00991.

[2] N. Chaudhuri and E. Feireisl. Navier-Stokes-Fourier system with Dirichlet boundary conditions. Appl. Anal., 101(12):4076–4094, 2022.

[3] P. A. Davidson. Turbulence: An Introduction for Scientists and Engineers. Oxford University Press, Oxford, 2004.

[4] J. Fan, S. Jiang, and Y. Ou. A blow-up criterion for compressible viscous heat-conductive flows. Ann. Inst. H. Poincaré Anal. Non Linéaire, 27(1):337–350, 2010.

[5] D. Fang, R. Zi, and T. Zhang. A blow-up criterion for two dimensional compressible viscous heat-conductive flows. Nonlinear Anal., 75(6):3130–3141, 2012.

[6] E. Feireisl, M. Lukáčová-Medviďová, H. Mizerová, and B. She. Numerical Analysis of Compressible Fluid Flows. Springer-Verlag, Cham, 2022.

[7] E. Feireisl and M. Lukáčová-Medviďová. Convergence of a stochastic collocation finite volume method for the compressible Navier–Stokes system. Archive Preprint Series, 2021. arXiv preprint No. 2111.07435.

[8] E. Feireisl and M. Lukáčová-Medviďová. Statistical solutions for the Navier–Stokes–Fourier system. Archive Preprint Series, 2022. arXiv preprint No. 2212.06784.

[9] E. Feireisl, A. Novotný, and Y. Sun. A regularity criterion for the weak solutions to the Navier-Stokes-Fourier system. Arch. Ration. Mech. Anal., 212(1):219–239, 2014.

[10] E. Feireisl and Y. Sun. Conditional regularity of very weak solutions to the Navier-Stokes-Fourier system. In Recent Advances in Partial Differential Equations and Applications, volume 666 of Contemp. Math., pages 179–199. Amer. Math. Soc., Providence, RI, 2016.

[11] E. Feireisl, H. Wen, and C. Zhu. On Nash's conjecture for models of viscous, compressible, and heat conducting fluids. IM ASCR Prague, preprint No. IM 2022 6, 2022.

[12] D. Hoff. Global solutions of the Navier-Stokes equations for multidimensional compressible flow with discontinuous initial data. J. Differential Equations, 120:215–254, 1995.

[13] X. Huang and J. Li. Serrin-type blowup criterion for viscous, compressible, and heat conducting Navier-Stokes and magnetohydrodynamic flows. Comm. Math. Phys., 324(1):147–171, 2013.

[14] X. Huang, J. Li, and Y. Wang. Serrin-type blowup criterion for full compressible Navier-Stokes system. Arch. Ration. Mech. Anal., 207(1):303–316, 2013.

[15] Q. Jiu, Y. Wang, and Y. Ye. Refined blow-up criteria for the full compressible Navier-Stokes equations involving temperature. J. Evol. Equ., 21(2):1895–1916, 2021.

[16] F. Merle, P. Raphaël, I. Rodnianski, and J. Szeftel. On the implosion of a compressible fluid I: smooth self-similar inviscid profiles. Ann. of Math. (2), 196(2):567–778, 2022.

[17] F. Merle, P. Raphaël, I. Rodnianski, and J. Szeftel. On the implosion of a compressible fluid II: singularity formation. Ann. of Math. (2), 196(2):779–889, 2022.

[18] Y. Sun, C. Wang, and Z. Zhang. A Beale-Kato-Majda criterion for the 3-D compressible Navier-Stokes equations. J. Math. Pures Appl., 95(1):36–47, 2011.

[19] Y. Sun, C. Wang, and Z. Zhang. A Beale-Kato-Majda criterion for three dimensional compressible viscous heat-conductive flows. Arch. Ration. Mech. Anal., 201(2):727–742, 2011.

[20] A. Valli. A correction to the paper: "An existence theorem for compressible viscous fluids" [Ann. Mat. Pura Appl. (4) 130 (1982), 197–213; MR 83h:35112]. Ann. Mat. Pura Appl. (4), 132:399–400 (1983), 1982.

[21] A. Valli. An existence theorem for compressible viscous fluids. Ann. Mat. Pura Appl. (4), 130:197–213, 1982.

[22] A. Valli and M. Zajaczkowski. Navier-Stokes equations for compressible fluids: Global existence and qualitative properties of the solutions in the general case. Commun. Math. Phys., 103:259–296, 1986.

[23] H. Wen and C. Zhu. Blow-up criterions of strong solutions to 3D compressible Navier-Stokes equations with vacuum. Adv. Math., 248:534–572, 2013.

[24] H. Wen and C. Zhu. Global solutions to the three-dimensional full compressible Navier-Stokes equations with vacuum at infinity in some classes of large data. SIAM J. Math. Anal., 49(1):162–221, 2017.
QNFRT4oBgHgl3EQfJje8/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

QtFJT4oBgHgl3EQfJyy0/content/tmp_files/2301.11462v1.pdf.txt ADDED
How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Aditya Yedetore*¹, Tal Linzen², Robert Frank³, R. Thomas McCoy*⁴
¹Boston University, ²New York University, ³Yale University, ⁴Princeton University
yedetore@bu.edu, linzen@nyu.edu, robert.frank@yale.edu, tom.mccoy@princeton.edu

Abstract
When acquiring syntax, children consistently choose hierarchical rules over competing non-hierarchical possibilities. Is this preference due to a learning bias for hierarchical structure, or due to more general biases that interact with hierarchical cues in children's linguistic input? We explore these possibilities by training LSTMs and Transformers—two types of neural networks without a hierarchical bias—on data similar in quantity and content to children's linguistic input: text from the CHILDES corpus. We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hierarchical structure is crucial. We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.
1  Introduction
Syntax is driven by hierarchical structure, yet we typically encounter sentences as linear sequences of words. How do children come to recognize the hierarchical nature of the languages they acquire? Some argue that humans must have a hierarchical inductive bias—an innate predisposition for hierarchical structure (Chomsky, 1965, 1980). An alternative view (e.g., Lewis and Elman, 2001) is that no such bias is necessary: there may be clear evidence for hierarchical structure in children's input, so that children would choose hierarchical rules even without a hierarchical bias.

* Work done while at Johns Hopkins University.
At first blush, recent work in natural language processing (NLP) may seem to indicate that no hierarchical bias is necessary. Neural networks trained on naturally-occurring text perform impressively on syntactic evaluations even though they have no explicit syntactic structure built into them (e.g., Gulordava et al., 2018; Wilcox et al., 2018; Warstadt et al., 2020a). However, these results do not provide strong evidence about the learning biases required to learn language from the data available to humans because these models receive very different training data than humans do (Warstadt and Bowman, 2022). First, NLP models are typically trained on far more data than children receive, so models have more opportunities to encounter rare syntactic structures (Linzen, 2020). Second, most training sets in NLP are built from Internet text (e.g., Wikipedia), which differs qualitatively from the utterances that children typically hear; e.g., sentences in Wikipedia are on average 25 words long (Yasseri et al., 2012), compared to 5 words for sentences in the North American English subset of the CHILDES corpus of child-directed speech (MacWhinney, 2000).
In this work, to evaluate if neural networks without a hierarchical bias generalize like children do, we train models on text¹ comparable to the sentences in children's linguistic input: English data from CHILDES. We then analyze what they have learned about the relationship between declarative sentences, such as (1a), and their corresponding yes/no questions, such as (1b):

(1) a. Those are your checkers.
    b. Are those your checkers?

Crucially, nearly all naturally-occurring yes/no questions are consistent with two rules: one based on hierarchical structure (2), and one based on linear order (3):²,³

¹Section 6.5 discusses other input types (e.g., visual input).

arXiv:2301.11462v1 [cs.CL] 26 Jan 2023
(2) HIERARCHICALQ: The auxiliary at the start of a yes/no question corresponds to the main auxiliary of the corresponding declarative.

(3) LINEARQ: The auxiliary at the start of a yes/no question corresponds to the first auxiliary of the corresponding declarative.
Despite the scarcity of evidence disambiguating these rules, children reliably favor HIERARCHICALQ (Crain and Nakayama, 1987), albeit with occasional errors consistent with LINEARQ (Ambridge et al., 2008). Yes/no questions thus are a prime candidate for an aspect of English syntax for which human-like generalization requires a hierarchical bias. We evaluate yes/no question performance in LSTMs and Transformers, two neural-network architectures that have no inherent hierarchical inductive bias (McCoy et al., 2020; Petty and Frank, 2021). These architectures employ different computational mechanisms, so consistent results across both would indicate that our results are not due to idiosyncrasies of one particular architecture.
To investigate if models generalize more consistently with the hierarchical or linear rule, we evaluate them on cases where the rules make different predictions, such as (4): under HIERARCHICALQ, the question that corresponds to (4a) is (4b), whereas under LINEARQ it is (4c).

(4) a. The boy who has talked can read.
    b. Can the boy who has talked read?
    c. *Has the boy who talked can read?
We find that across several ways of framing the learning task, models fail to learn HIERARCHICALQ. Instead, they generalize in ways that depend on linear order and on the identities of specific words. These results suggest that children’s training data, if taken to be words alone, may not contain enough hierarchical cues to encourage hierarchical generalization in a learner without a hierarchical bias. Thus, explaining human acquisition of syntax may require postulating that humans have stronger inductive biases than those of LSTMs and Transformers, or that information other than word sequences plays a crucial role.⁴

²In past work these rules have been framed as transformations named MOVE-FIRST and MOVE-MAIN (McCoy et al., 2020). We instead follow Berwick et al. (2011) and frame the child’s knowledge as a relationship between sentences.
³Though these two rules are the most prominent in prior literature, other rules are possible; see Section 5.2.
2 Background
Though HIERARCHICALQ and LINEARQ often make the same predictions, the evidence in children’s input may still favor HIERARCHICALQ. The most straightforward evidence would be utterances that directly disambiguate the rules, such as (4b). Pullum and Scholz (2002) show that disambiguating examples appear in the Wall Street Journal, in literature, and arguably in child-directed speech, but direct evidence may still be too rare to robustly support HIERARCHICALQ (Legate and Yang, 2002). Nonetheless, children might conclude that yes/no questions obey HIERARCHICALQ rather than LINEARQ based on indirect evidence—evidence that other syntactic phenomena are hierarchical (Mulligan et al., 2021).
To test if the cues favoring HIERARCHICALQ render a hierarchical bias unnecessary, we study how well non-hierarchically-biased models acquire English yes/no questions. Several prior papers have used this approach, but their training data differed from children’s input in important ways: some used synthetic datasets (Lewis and Elman, 2001; Frank and Mathis, 2007; Clark and Eyraud, 2007; McCoy et al., 2020), others used massive Internet corpora (Lin et al., 2019; Warstadt and Bowman, 2020), and those that used child-directed speech simplified the data by replacing each word with its part of speech (Perfors et al., 2011; Bod et al., 2012). We used training data closer to children’s input, namely sentences from CHILDES with word identities preserved, rather than being converted to parts of speech. Two other recent works have also trained neural networks on CHILDES data (Pannitto and Herbelot, 2020; Huebner et al., 2021), but neither investigated yes/no questions.
One particularly important reason for training models on CHILDES is that, in prior work, different types of training data have yielded diverging results: Recent models trained on synthetic data failed to properly acquire yes/no questions (McCoy et al., 2020; Petty and Frank, 2021), whereas ones trained on large Internet corpora scored well on evaluations of yes/no questions (Lin et al., 2019; Warstadt and Bowman, 2020). Given these differing results, it is not clear from past work how these models would generalize when faced with the type of data that children receive.

⁴Our datasets and models will be uploaded online soon to facilitate further research.
3 Overview of Experimental Setup
We evaluated models on yes/no questions in two ways. First, we used relative acceptability judgments (Experiment 1): We trained neural networks on the task of language modeling (predicting the next word at every point in the sentence) and evaluated whether they assigned a higher probability to sentences consistent with LINEARQ or HIERARCHICALQ. Our second approach was based on text generation (Experiment 2): We trained networks to take in a declarative sentence and output the corresponding question, and tested whether they generalized in a way more consistent with LINEARQ or HIERARCHICALQ. Under both framings, we trained models on data from CHILDES and evaluated them on targeted datasets constructed to differentiate LINEARQ and HIERARCHICALQ.
4 Experiment 1: Relative Acceptability

4.1 Dataset
To train models on data as similar as possible to the sentences children receive, we extracted data from CHILDES (MacWhinney, 2000). We used the North American English portion. We wished to replicate children’s input, so we excluded the children’s own utterances, leaving a 9.6-million-word corpus. We allocated 90% of the data to training, 5% to validation, and 5% to testing. We replaced words that appeared two or fewer times in the training set with <unk>, giving a replacement rate of 0.3%. See Appendix A for more details.
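The vocabulary-thresholding step described above can be sketched as follows; this is a minimal illustration in Python (the function name and toy data are ours, not from the paper’s released code):

```python
from collections import Counter

def unkify(train_sents, min_count=3):
    """Replace words appearing fewer than min_count times in the
    training data (i.e., two or fewer times) with an <unk> token."""
    counts = Counter(w for sent in train_sents for w in sent)
    vocab = {w for w, c in counts.items() if c >= min_count}
    unked = [[w if w in vocab else "<unk>" for w in sent]
             for sent in train_sents]
    return unked, vocab

# Toy corpus: 'spell' occurs only twice, so it falls below the threshold.
train = [["you", "can", "spell", "your", "name"],
         ["can", "you", "spell", "it"],
         ["you", "can", "try"]]
unked, vocab = unkify(train)
```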
4.2 Task: Next-Word Prediction
We trained models on next-word prediction, also known as language modeling. We chose this task for two reasons. First, it is clear empirically that next-word prediction can teach neural networks a substantial amount about syntax (e.g., Hu et al., 2020). Second, it is plausible that humans perform some version of next-word prediction during sentence processing (Altmann and Kamide, 1999; Hale, 2001; Levy, 2008; Kutas et al., 2011) and that such prediction may play a role in acquisition (Elman, 1991). Thus, while next-word prediction is certainly not the only goal of human language learners, we view this task as a reasonable first step in emulating human language acquisition.
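Concretely, next-word prediction turns each sentence into one training example per token, where the model must predict that token from its left context. A minimal sketch of this decomposition (our own illustration, not the paper’s code):

```python
def next_word_examples(sent):
    """Decompose one sentence into (context, target) pairs: the model
    predicts each word, plus an end-of-sentence token, from the words
    that precede it."""
    tokens = sent + ["<eos>"]
    return [(tokens[:i], tokens[i]) for i in range(len(tokens))]

pairs = next_word_examples(["are", "those", "your", "checkers", "?"])
```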
4.3 Architectures
We used two neural network architectures: LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017). We chose these models for two reasons. First, they have been the most successful architectures in NLP. Thus, we have reason to believe that, of the types of low-bias models invented, these two are the ones most likely to discover linguistic regularities in our CHILDES training data. Second, the two architectures process sequences very differently (via recurrence vs. via attention). Thus, if both generalize similarly, we would have evidence that what was learned is strongly evidenced in the data, rather than due to a quirk of one particular architecture.
For our LSTMs, we used 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10. For our Transformers, the corresponding values were 4, 800, 10, 0.2, and 5, and we used 4 attention heads. We chose these values based on a hyperparameter search described in Appendix B. All following results are averaged across 10 runs with different random seeds.
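For reference, the reported hyperparameters can be collected into plain configuration records (this restates the values above; the variable names are ours):

```python
# Hyperparameters from Section 4.3, gathered into config records.
lstm_config = {
    "layers": 2, "hidden_size": 800, "embedding_size": 800,
    "batch_size": 20, "dropout": 0.4, "learning_rate": 10,
}
transformer_config = {
    "layers": 4, "hidden_size": 800, "embedding_size": 800,
    "batch_size": 10, "dropout": 0.2, "learning_rate": 5,
    "attention_heads": 4,
}
```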
4.4 Results: Language Model Quality
Before testing models on questions, we used perplexity to evaluate how well they captured the basic structure of their training domain. As a baseline, we used a 5-gram model with Kneser-Ney smoothing (Kneser and Ney, 1995) trained with KenLM (Heafield, 2011). The test set perplexity for the 5-gram baseline was 24.37, while the average test set perplexity for the LSTMs and Transformers was 20.05 and 19.69, respectively. For perplexity, lower is better. Thus, both neural network types outperformed the strong baseline of a smoothed 5-gram model, showing that they performed well at capturing the basic statistics of their training domain.⁵
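Perplexity here can be read as the exponential of the average per-token negative log-probability, so a model that assigned every token probability 1/20 would have perplexity 20. A minimal sketch (our own helper, not the paper’s evaluation code):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative natural-log probability per token);
    lower values mean the model finds the text less surprising."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Sanity check: uniform probability 1/20 per token gives perplexity 20.
ppl = perplexity([math.log(1 / 20)] * 100)
```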
4.5 General Syntactic Evaluation
As an additional way to check the validity of our setup, we evaluated our models on the Zorro dataset (Huebner et al., 2021), which is based on BLiMP (Warstadt et al., 2020a). Zorro contains 24 evaluations, each of which targets one syntactic phenomenon (e.g., subject-verb agreement) and involves sentence pairs for which one sentence is grammatical, and the other is minimally different but ungrammatical (e.g., by violating subject-verb agreement). A model is said to get a sentence pair correct if it assigns a higher probability to the grammatical sentence than the ungrammatical one. Huebner et al. (2021) showed that Transformers trained on CHILDES data can perform well on many of the Zorro categories, so if our setup is sound, our own models should also perform well on Zorro.

⁵For an intuitive illustration of our model quality, see the sample text generated by them in Appendix H.
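The minimal-pair scoring used here can be sketched as a simple comparison of sentence-level scores (the scorer below is a hypothetical stand-in for a trained language model’s log-probability):

```python
def prefers_grammatical(score, grammatical, ungrammatical):
    """A model gets a minimal pair 'correct' if it assigns a higher
    probability (here, a higher total log-probability) to the
    grammatical sentence of the pair."""
    return score(grammatical) > score(ungrammatical)

# Invented log-probabilities standing in for a trained LM.
toy_scores = {"the dogs do sleep": -10.0,
              "the dogs does sleep": -14.5}
correct = prefers_grammatical(toy_scores.get,
                              "the dogs do sleep",
                              "the dogs does sleep")
```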
See Appendix D for full results. For each syntactic phenomenon, most model re-runs scored above 0.9, though at least one scored near the chance level of 0.5. For each re-run of each architecture there is at least one phenomenon for which the model scores over 0.97, and many models score 1.00 on some phenomena. Thus, all models score well on at least some syntactic evaluations, attaining results comparable to those of Huebner et al. (2021) and providing additional support for the validity of our setup. We now test whether these models have also successfully learned the specific phenomenon that we focus on, yes/no questions—a phenomenon not included in the Zorro dataset.
4.6 Yes/No Questions

Evaluation Dataset: Forced-Choice Acceptability Judgments
As a first way to test whether our models have learned HIERARCHICALQ, we evaluate whether they assign higher probabilities to sentences consistent with HIERARCHICALQ than to minimally different sentences that are ungrammatical. For this purpose, we create an evaluation dataset containing groups of 6 questions, each created by starting with a declarative sentence, such as (5), and then deleting the first, main, or neither auxiliary, and inserting the first or main auxiliary at the front of the sentence.⁶ For instance, in (6b), the first auxiliary has been preposed, and the main auxiliary has been deleted.

(5) The dog who has seen a boy did try.

(6) a. Has the dog who seen a boy did try?
    b. Has the dog who has seen a boy try?
    c. Has the dog who has seen a boy did try?
    d. Did the dog who seen a boy did try?
    e. Did the dog who has seen a boy try?
    f. Did the dog who has seen a boy did try?

⁶It would be possible to also use a ‘prepose other’ category, where an auxiliary not in the input is inserted (McCoy et al., 2018). We excluded this category because using it would raise complications about which ‘other’ auxiliary to choose.
Within each group, we evaluate which question the model assigned the highest probability to. If a model has correctly learned HIERARCHICALQ, it should assign the highest probability to the question consistent with this rule, such as (6e).

Several past papers about yes/no questions have used the same general approach (Lewis and Elman, 2001; Reali and Christiansen, 2005). However, these papers considered only pairs of sentences, whereas we consider groups of 6 to allow for a wider range of possible generalizations that a model might have learned.
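Picking the model’s preferred candidate within a group can be sketched as an argmin over per-word perplexity, the scoring used in the results below (the log-probabilities here are invented for illustration):

```python
import math

def per_word_perplexity(token_logprobs):
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def preferred_question(candidates):
    """Given {question: per-token log-probs} for one group of six,
    return the candidate the model scores most highly, i.e., the one
    with the lowest per-word perplexity."""
    return min(candidates, key=lambda q: per_word_perplexity(candidates[q]))

# Invented scores for two of the six candidates in a group:
group = {"did the dog who has seen a boy try ?": [-2.0] * 10,
         "has the dog who seen a boy did try ?": [-3.5] * 10}
best = preferred_question(group)
```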
To generate the declaratives from which we formed groups of 6 questions, we used the context-free grammar (CFG) in Appendix F, which has a vocabulary selected from the most common words in CHILDES. Each declarative generated by the CFG (e.g., (5)) contains two auxiliary verbs: one before the sentence’s main verb and one inside a relative clause modifying the subject. One potential problem is that some questions are consistent with both HIERARCHICALQ and LINEARQ. For instance, (7a) can be formed from (7b) with the HIERARCHICALQ-consistent steps PREPOSE-MAIN, DELETE-MAIN, or from (7c) with the LINEARQ-consistent steps PREPOSE-FIRST, DELETE-MAIN.

(7) a. Did the boy who did see the person laugh?
    b. The boy who did see the person did laugh.
    c. The boy who did see the person can laugh.

To avoid this problem, we required that the auxiliary before the main verb must select for a different verb inflection than the one in the relative clause. For instance in (5), did selects for the verb’s bare form, while has selects for the past participle form. Thus, the auxiliary at the start of the question could only correspond to whichever auxiliary in the declarative has the same selectional properties.⁷
Results: Relative Question Acceptability For each sentence group, we used per-word perplexity to see which of the 6 candidates the models scored most highly.⁸ For both LSTMs and Transformers, the correct category (PREPOSE MAIN, DELETE MAIN) was the second-rarest choice, and the most frequent preference was for PREPOSE FIRST, DELETE MAIN, a category that is only partially correct because it references linear order in addition to hierarchical structure (Figure 1). Thus, neither model displays preferences consistent with the correct, fully-hierarchical generalization. The two model types showed similar scores, which may mean that these results are largely driven by the statistics of the training data that both models share, rather than the models’ differing inductive biases.

⁷A model could succeed on this dataset with a rule that relates the auxiliary at the start of a question with the last auxiliary in the declarative form. Since our models fail on this dataset, this consideration is not relevant here.
⁸We also explored evaluation of the models with a more complex measure called SLOR, where we additionally normalized scores by word frequency (Pauls and Klein, 2012). Both metrics produced qualitatively similar results, so we only report the simpler metric here. See Appendix C.1.

[Figure 1 plot: preference proportions (0 to 1) for the six question types (Prepose First/Prepose Main × Delete First/Delete Main/Delete none), shown separately for LSTMs and Transformers; example declarative: “The person who has seen this boy did try.”]

Figure 1: The question types that models prefer when offered a choice between 6 questions. These 6 questions are formed by modifying a declarative with a relative clause on the subject according to ‘prepose’ and ‘delete’ rules. The correct category is PREPOSE MAIN, DELETE MAIN. Within each architecture, the proportions across all 6 question types necessarily sum to 1. Each bar shows the average across 10 model re-runs, with single-standard-deviation error bars.
One of the incorrect categories—PREPOSE MAIN, DELETE NONE, such as (6f)—only requires reference to hierarchical structure, so it could be said to capture the hierarchical nature of yes/no questions. Nonetheless, this category was also relatively rare: combining the two fully hierarchical possibilities (PREPOSE MAIN, DELETE MAIN and PREPOSE MAIN, DELETE NONE) accounts for only 26% of LSTM preferences and 27% of Transformer preferences, meaning that both models over 70% of the time favored a sentence generated at least partially based on linear order.
There are two likely reasons for why our models performed so poorly on yes/no questions when they performed well on many of the phenomena in the Zorro dataset (Section 4.5). First, yes/no questions may simply be harder to learn than the other phenomena; indeed, yes/no questions are often singled out as being likely to pose difficulties for a general-purpose learner (Section 1). Alternatively, it might be that the six-way evaluation we used for yes/no questions is stricter than the binary judgments used for the Zorro dataset.
5 Experiment 2: Question Formation
The previous experiment was designed to operate entirely in the next-word-prediction paradigm, motivated by arguments from past literature about the strength and relative ecological validity of next-word prediction as a training objective (see Section 4.2). However, one of this setup’s shortcomings is that HIERARCHICALQ describes correspondences between questions and declaratives, but Experiment 1 focused on questions alone, with no consideration of declaratives.
In this second experiment, to better capture that HIERARCHICALQ is defined over sentence pairs, we trained models on a sentence-pair task: transforming a declarative into a question (McCoy et al., 2020). For instance, given “the child did learn” the model must produce “did the child learn ?”
We evaluated models in two ways. First, we checked if the models’ predictions fully matched the correct questions. This full-sentence evaluation is demanding, and models might fail this evaluation for reasons unrelated to our core hypotheses. For instance, given “the child did learn” the model might produce “did the baby learn”, which would be marked as incorrect, even though this lexical error is not relevant to HIERARCHICALQ.
As a metric that is less demanding and that also more directly targets HIERARCHICALQ, we measured if the first word of the output question corresponded to the first or main auxiliary of the input. Critically, LINEARQ and HIERARCHICALQ make different predictions for the first word of a question so long as the two auxiliaries are distinct: see (4). Because this framing lets the model freely generate its output (instead of choosing one option from a pre-specified set), we allow for the possibility that the rule learned by models may not be identical to any of our manually-generated hypotheses.
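The first-word metric can be sketched as a simple classification of the model’s first output word against the two rules’ predictions (our own helper, written to illustrate the logic):

```python
def classify_first_word(first_word, first_aux, main_aux):
    """Label a model's first output word by which rule(s) it matches.
    When the two auxiliaries are distinct, LINEARQ and HIERARCHICALQ
    make different predictions."""
    matches_linear = first_word == first_aux
    matches_hierarchical = first_word == main_aux
    if matches_linear and matches_hierarchical:
        return "both rules"
    if matches_hierarchical:
        return "HierarchicalQ only"
    if matches_linear:
        return "LinearQ only"
    return "neither rule"

# For (4a) 'the boy who has talked can read':
# first auxiliary = 'has', main auxiliary = 'can'.
label = classify_first_word("can", first_aux="has", main_aux="can")
```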
Solely training models to perform this transformation involves the implicit assumption that, when children acquire English yes/no questions, the only evidence they leverage is English yes/no questions. However, other types of sentences may also provide useful evidence (Pearl and Mis, 2016): e.g., wh-questions also illustrate subject-auxiliary inversion (Pullum and Scholz, 2002), while, more generally, many types of sentences could provide evidence that the syntax as a whole is hierarchical (Perfors et al., 2011). To explore this possibility, we compared a condition in which models were only trained to perform question formation (the QUESTION FORMATION condition) to another in which models were first pre-trained on next-word prediction with the exact same setup as in Experiment 1 before being further trained to perform question formation (the NEXT-WORD PREDICTION + QUESTION FORMATION condition).
5.1 Dataset

Training Set
Our question formation dataset consisted of the yes/no questions in the CHILDES Treebank (Pearl and Sprouse, 2013a,b), a parsed subset of CHILDES containing 189,359 sentences. We used these parses to extract all yes/no questions from the CHILDES Treebank and derive their corresponding declarative forms. The resulting declarative was concatenated with the question. An example declarative/question pair is:

(8) you can spell your name . can you spell your name ?
The training set consisted of 10,870 declarative/question pairs, the validation set 1,360 pairs, and the test set 1,358 pairs (we will call this test set the randomly-partitioned test set to distinguish it from two other evaluation sets discussed below). We trained models to perform next-word prediction on such concatenated sentence pairs.

The first-word accuracy of the trained model was then computed based on the model’s prediction for the word after the period in each test example, while the full-sentence accuracy was computed based on its predictions for all tokens after the period. All questions in the randomly-partitioned test set were withheld from both the question-formation training set and the next-word-prediction training set. Thus, models had not seen these test examples in their training, even in the NEXT-WORD PREDICTION + QUESTION FORMATION condition in which they were trained on both tasks.
Evaluation Sets In addition to the randomly-partitioned test set, we used CFGs to generate two targeted evaluation sets. As in Experiment 1, we selected the CFGs’ vocabulary from common words in our CHILDES data. In sentences generated from the first CFG, the sentence’s first auxiliary was also its main auxiliary, so LINEARQ and HIERARCHICALQ make the same predictions. (8) exemplifies the type of declarative-question pair in this dataset. We call this dataset FIRST-AUX = MAIN-AUX. For sentences generated by the second CFG, the main auxiliary was the second auxiliary in the sentence; thus, these examples disambiguate LINEARQ and HIERARCHICALQ. Example (9) is a declarative-question pair from this evaluation set.

(9) a boy who is playing can try . can a boy who is playing try ?

We call this dataset FIRST-AUX ≠ MAIN-AUX. See Appendix F for the CFGs used. We sampled 10,000 declarative sentences from these grammars and transformed them into questions according to HIERARCHICALQ to create our evaluation sets.
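Generating such targeted examples can be sketched with a toy CFG sampler. The grammar below is our own miniature stand-in for the Appendix F grammars (not the paper’s actual rules or vocabulary), built so that the main auxiliary is always the second auxiliary, as in the FIRST-AUX ≠ MAIN-AUX set:

```python
import random

# Toy grammar: S -> NP RC Aux2 V, where the relative clause on the
# subject contains the first auxiliary. Every declarative it yields
# has exactly 7 words, e.g. of the form 'a boy who is playing can try'.
grammar = {
    "S": [["NP", "RC", "Aux2", "V"]],
    "NP": [["a", "boy"], ["the", "dog"]],
    "RC": [["who", "Aux1", "Ving"]],
    "Aux1": [["is"]],
    "Aux2": [["can"], ["did"]],
    "Ving": [["playing"], ["running"]],
    "V": [["try"], ["laugh"]],
}

def sample(symbol, rng):
    """Recursively expand a symbol; strings absent from the grammar
    are treated as terminals."""
    if symbol not in grammar:
        return [symbol]
    return [w for part in rng.choice(grammar[symbol])
            for w in sample(part, rng)]

def hierarchical_question(declarative):
    """Form the HIERARCHICALQ question by preposing the main
    auxiliary (always position 5 in this toy grammar)."""
    return [declarative[5]] + declarative[:5] + declarative[6:] + ["?"]

sent = sample("S", random.Random(0))
question = hierarchical_question(sent)
```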
5.2 Results

Randomly-Partitioned Test Set
The LSTMs and Transformers in the QUESTION FORMATION condition performed well on the randomly-partitioned test set, with a full-question accuracy of 0.68 ± 0.014 and 0.87 ± 0.005 (averaged across 10 reruns with margins indicating one standard deviation). The models in the NEXT-WORD PREDICTION + QUESTION FORMATION condition performed similarly well, with a full-question accuracy of 0.66 ± 0.008 for the LSTMs and 0.93 ± 0.004 for the Transformers. For both model types, the first-word accuracy for the question was nearly 1.00 across re-runs. We suspect that Transformers have a stronger full-question accuracy because producing the question requires copying all words from the declarative (but in a different order). Copying is likely easy for Transformers because they can attend to specific words in the prior context, while our LSTMs must compress the entire context into a fixed-size vector, which may degrade the individual word representations. Because both model types achieved near-perfect performance on the crucial first-word accuracy metric, we conclude that our models have successfully learned how to handle the types of declarative/question pairs that we extracted from the CHILDES Treebank.
Targeted Evaluation Sets On our two targeted evaluation sets, models almost never produced the complete question correctly. Turning to the more lenient measure of first-word accuracy, for examples on which LINEARQ and HIERARCHICALQ predict the same first output word (FIRST-AUX = MAIN-AUX), the Transformer trained only on question formation performed strongly, while the Transformer trained on both tasks, and both LSTMs, performed reasonably well (Figure 2; note models could choose any word in their vocabulary to begin the output, so chance performance is near 0.00). For the crucial cases that disambiguate the two rules (FIRST-AUX ≠ MAIN-AUX), both models in both conditions performed more consistently with LINEARQ than HIERARCHICALQ. Training on next-word prediction before question formation had inconsistent effects: it modestly increased the likelihood of hierarchical generalization in LSTMs, yet it decreased that likelihood in Transformers.

[Figure 2 plot: for LSTMs and Transformers on the FIRST-AUX = MAIN-AUX and FIRST-AUX ≠ MAIN-AUX sets, bars show consistency (0 to 1) of the first output word with HIERARCHICALQ & LINEARQ, HIERARCHICALQ only, and LINEARQ only, in the QUESTION FORMATION and NEXT-WORD PREDICTION + QUESTION FORMATION conditions.]

Figure 2: Proportion of model-produced questions that were consistent with the linear rule LINEARQ and/or the hierarchical rule HIERARCHICALQ. In the FIRST-AUX = MAIN-AUX dataset, the first auxiliary is the main auxiliary, so both LINEARQ and HIERARCHICALQ produce the correct question string. The FIRST-AUX ≠ MAIN-AUX dataset disambiguates the two rules. Each bar shows the average across 10 model re-runs, with error bars showing one standard deviation.
Lexical Specificity In Appendix G, we further break down the FIRST-AUX ≠ MAIN-AUX results based on the auxiliaries’ identity. The generalization pattern varied considerably across auxiliary pairs. For some auxiliary pairs, the auxiliary chosen to begin the question was usually neither auxiliary in the input (Figure 3, left facet). For other pairs, models usually chose the first auxiliary, regardless of lexical identity (Figure 3, middle facet). Finally, for some pairs, the auxiliary chosen was usually the same one, regardless of whether it was the first or main auxiliary (Figure 3, right facet).
[Figure 3 plot: three facets (have and has; can and do; have and did), each showing consistency (0 to 1) of the first output word with Move-first-aux, Move-main-aux, and auxiliary-specific rules such as Move-have, Move-has, Move-can, Move-do, and Move-did.]

Figure 3: Lexical specificity in model behavior. Each facet considers only the evaluation examples containing the two auxiliaries in the facet heading; e.g., the can and do facet includes, for example, the inputs the children who can play do learn and the children who do play can learn. The bars show the proportion of model predictions for the first word of the output that are consistent with four potential movement rules, averaged across 10 model re-runs and with error bars showing one standard deviation above and below the mean. This plot only shows an illustrative subset of auxiliary pairs for one model type (Transformers in the NEXT-WORD PREDICTION + QUESTION FORMATION condition); see Appendix G for the full results.

Generalization based on lexical identity is rarely considered in past discussions of English yes/no question acquisition. Of the papers on this phenomenon (see Clark and Lappin (2010), Lasnik and Lidz (2017), and Pearl (2021) for overviews), the only one to our knowledge that discusses lexical specificity is Frank and Mathis (2007), which studied models trained on synthetic data. Our results highlight the importance of testing for a broad range of generalizations: Lexically-specific hy-
+
potheses appear attractive for our low-bias learners,
|
| 755 |
+
so an account of what biases can yield human-like
|
| 756 |
+
learning should rule out these lexically-specific hy-
|
| 757 |
+
potheses along with linear ones.
6 Discussion

We have found that, when trained on child-directed speech, two types of standard neural networks performed reasonably well at capturing the statistical properties of the dataset, yet their handling of English yes/no questions was more consistent with a linear rule LINEARQ than the correct hierarchical rule HIERARCHICALQ. These results support the hypothesis that a learner requires a hierarchical bias to consistently learn hierarchical rules when learning from the linguistic data children receive.
6.1 Takeaways for LSTMs and Transformers

When trained on massive corpora, LSTMs and Transformers perform impressively on some syntactic evaluations. Based on such results, it is tempting to conclude that the general-purpose biases of these architectures suffice to yield human-like syntax acquisition. Our results caution against this interpretation: When we trained the same architectures on data more similar to children's input, they failed to learn the structure of English yes/no questions. Thus, at least when learning from text alone, LSTMs and Transformers do not display human-like language learning—they do not generalize as humans do from the data that humans receive.
6.2 Takeaways for the Poverty of the Stimulus Debate

Below we specify four possible positions in the poverty-of-the-stimulus debate about the adequacy of children's input for inducing hierarchical rules in low-bias learners, arranged from assuming the most limited to the most expansive innate component:

(10) Any inductive biases: Any learner trained on CHILDES will generalize like humans do.

(11) Any inductive biases that enable in-distribution learning: Any learner that captures the statistical patterns of the training distribution will generalize to HIERARCHICALQ.

(12) Some non-hierarchical inductive biases: Some general-purpose learners will generalize as humans do, but others will not.

(13) Only a hierarchical inductive bias: No general-purpose learners will generalize as humans do: hierarchical biases are necessary.
Position (10) is clearly false: many learners cannot learn certain aspects of syntax, no matter their training data (e.g., bigram models cannot capture long-distance dependencies). Our work shows that position (11) is also false: Though our models performed well on the in-distribution test sets of Experiments 1 and 2, they did not generalize in human-like ways. This leaves positions (12) and (13), which our existing results cannot differentiate. It is possible that only learners with hierarchical inductive biases can demonstrate human-like language learning (position (13)), but also that some learners without this bias can succeed (position (12))—just not the learners we tested. For further discussion of how computational modeling can bear on learnability arguments, see Wilcox et al. (2021).
One potential solution supporting position (12) would be that learners leverage the hierarchical structure of some syntactic phenomenon to help conclude that other, impoverished phenomena are hierarchical (Perfors et al., 2011; Mulligan et al., 2021). However, our results from Experiment 2 show that giving learners access to a wider range of phenomena does not automatically improve hierarchical generalization: Models' performance on question formation was not substantially improved (and in some cases was even harmed) when they were trained not just on question formation but also on next-word prediction on the entire CHILDES corpus. Thus, although training on text that contains many linguistic phenomena can give models a hierarchical inductive bias when the training is done over large Internet corpora (Warstadt and Bowman, 2020; Mueller et al., 2022), our results provide evidence that this conclusion does not extend to models trained on child-directed speech.
Though both (12) and (13) remain as possibilities, we believe that our results more strongly support (13). Of all currently available general-purpose learners, LSTMs and Transformers are the best at modeling the probabilistic structure of linguistic data. Therefore, if child-directed speech contains clear evidence for the hierarchical nature of yes/no questions—evidence so clear that at least some general-purpose learners could recognize it—it is likely that LSTMs and Transformers would be among the set of general-purpose learners that could use this evidence to make hierarchical generalizations in our experiments. The fact that these architectures instead predominantly favored linear generalizations therefore supports position (13).
6.3 How to test for HIERARCHICALQ

We have argued that an ideal simulation of the acquisition of English yes/no questions would have the following properties:

(14) The training data should be similar to children's linguistic input.

(15) The training task should be ecologically valid.

(16) The evaluation method should focus on correspondences between pairs of sentences rather than the acceptability of individual sentences.
Property (14) motivated our use of text from CHILDES as the training data. We are not aware of a single experimental setup that fully satisfies both Property (15) and Property (16), so we instead used two experiments, each one focusing on one property at the cost of satisfying the other one less well. Experiment 1 works entirely in the context of the relatively ecologically valid task of next-word prediction, motivated by Property (15), but its evaluation is only based on the acceptability of individual sentences, failing to satisfy Property (16). Experiment 2 fully satisfies Property (16) by using an evaluation based on sentence pairs, at the cost of including a less ecologically-valid training component based on sentence transformations. Both experiments yielded qualitatively similar conclusions (failure of models to learn HIERARCHICALQ).
6.4 Quantity of Training Data

The size of our training set was plausibly within the range from which children can acquire HIERARCHICALQ. Crain and Nakayama (1987) found that children between ages 3 and 5 behaved much more consistently with HIERARCHICALQ than LINEARQ. Though these children made many errors, their errors were usually compatible with a hierarchical rule (e.g., PREPOSE MAIN, DELETE NONE errors: see Section 4.6). By age 3, American children receive approximately 10 to 33 million words of input (Hart and Risley, 1995), and the 8.5 million words of our training set is close to the lower end of that range. Thus, it is reasonable to suppose that a learner that generalizes as children do would favor HIERARCHICALQ after being trained on our training set. Our models, in contrast, regularly preferred sentences generated in ways based on linear order (Figures 1 and 2), a category of error that is very rare in children (Crain and Nakayama, 1987; Ambridge et al., 2008).
In order to give our models the strongest chance of generalizing correctly, it would have been ideal to provide a quantity of data closer to 33 million words, the high end of Hart and Risley's range. Our data source did not contain enough text to make this possible, but future work could investigate ways to augment the data using other sources.
6.5 Type of Training Data

Our training set was both qualitatively and quantitatively closer to children's input than the massive Internet corpora standardly used to train models in NLP (Linzen, 2020). This difference is important: Lin et al. (2019), Warstadt and Bowman (2020), and Mueller et al. (2022) all found evidence that models trained on large Internet corpora performed well on yes/no question evaluations, whereas our models trained on CHILDES performed poorly—though we cannot be certain the differences in results are solely due to differences in the training data, since these prior papers used different model architectures, training tasks, and evaluation setups.
Though our training data are more similar to children's input than massive Internet corpora are, differences remain. Our experiments omit several aspects of a child's experience that might help them acquire syntax, such as prosody (Morgan and Demuth, 1996), visual information (Shi et al., 2019), and meaning (Fitz and Chang, 2017; Abend et al., 2017), all of which might correlate with syntactic structure and thus provide cues to the correct hierarchical generalization. On the other hand, our dataset might present an easier learning scenario than children are faced with, because children must learn to segment the speech stream into words (Lakhotia et al., 2021), while our models do not need to. Further, though real-world grounding could provide helpful information, learners might struggle to leverage this information due to difficulty determining what is being discussed in the physical world (Gleitman et al., 2005).
7 Conclusion

In this work, we trained two types of neural networks (LSTMs and Transformers) on sentences of the types available to children and then analyzed what they had learned about English yes/no questions. Across several evaluation paradigms, these models failed to generalize in human-like ways: Humans display hierarchical generalization, while the models' generalization was instead based on linear order and individual words' identities. Our results support the hypothesis that human-like linguistic generalization requires biases stronger than those of LSTMs and Transformers. Future work should investigate what inductive biases enable successful generalization. One approach would be to test architectures with built-in hierarchical structure; past work has shown that such architectures have a hierarchical bias (McCoy et al., 2020) and generalize better on the hierarchical phenomenon of subject-verb agreement (Kuncoro et al., 2018; Lepori et al., 2020), so they may also generalize better on English yes/no questions. A final direction would be to expand the input beyond words alone so that learners can leverage hierarchical structure that is present in other modalities, such as hierarchical structure in visual scenes.
Ethics Statement

Use of human data: While we did not collect any new human data ourselves, many of our analyses involved the use of prior datasets within the CHILDES database. All of these datasets were collected in accordance with IRB policies at the institutions of the data collectors, and all followed standard practices in obtaining informed consent and deidentifying data.9
Risks and limitations: The main risk of our proposed analyses is that future work using the same analyses might draw overly strong conclusions based on increased model performance, leading to overestimates of model strength. Such overestimates are an issue because they can lead users to place more trust in a model than is warranted.

To clarify, we view strong performance on our evaluation datasets as necessary but not sufficient to demonstrate human-like learning. Thus, if models perform poorly on our datasets (as the models we evaluated did), then we have strong reason to conclude that models are not learning in human-like ways. If future models perform better, such results would be consistent with human-like learning but would not conclusively establish that models learn as humans do, as they might instead be using some shallow heuristic that is not controlled for in our datasets. In other words, a criterion that is necessary but not sufficient facilitates strong conclusions about failure but does not facilitate strong conclusions about success. If future papers are faced with models that are more successful, such papers would ideally supplement results based on our datasets with analyses of models' internal strategies in order to more conclusively establish that what they have learned is not a spurious heuristic.
9 https://talkbank.org/share/irb/

References

Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition, 164:116–143.

Gerry T. M. Altmann and Yuki Kamide. 1999. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3):247–264.

Ben Ambridge, Caroline F. Rowland, and Julian M. Pine. 2008. Is structure dependence an innate constraint? New experimental evidence from children's complex-question production. Cognitive Science, 32(1):222–255.

Robert Berwick, Paul Pietroski, Beracah Yankama, and Noam Chomsky. 2011. Poverty of the stimulus revisited. Cognitive Science, 35:1207–1242.

Rens Bod, Margaux Smets, et al. 2012. Empiricist solutions to nativist problems using tree-substitution grammars. Workshop on Computational Models of Language Acquisition and Loss: EACL.

Noam Chomsky. 1965. Aspects of the Theory of Syntax, 50 edition. The MIT Press.

Noam Chomsky. 1980. Rules and Representations. Columbia University Press.

Alexander Clark and Rémi Eyraud. 2007. Polynomial identification in the limit of substitutable context-free languages. Journal of Machine Learning Research, 8(8).

Alexander Clark and Shalom Lappin. 2010. Linguistic Nativism and the Poverty of the Stimulus. John Wiley & Sons.

Stephen Crain and Mineharu Nakayama. 1987. Structure dependence in grammar formation. Language, pages 522–543.

Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2):195–225.

Hartmut Fitz and Franklin Chang. 2017. Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166:225–250.

Robert Frank and Donald Mathis. 2007. Transformational networks. Models of Human Language Acquisition, 22.

Lila R. Gleitman, Kimberly Cassidy, Rebecca Nappa, Anna Papafragou, and John C. Trueswell. 2005. Hard words. Language Learning and Development, 1(1):23–64.

Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically.

John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.

Betty Hart and Todd R. Risley. 1995. Meaningful Differences in the Everyday Experience of Young American Children. Paul H. Brookes Publishing.

Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland. Association for Computational Linguistics.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744, Online. Association for Computational Linguistics.

Philip A. Huebner, Elior Sulem, Cynthia Fisher, and Dan Roth. 2021. BabyBERTa: Learning more grammar with small-scale child-directed language. In Proceedings of CoNLL.

Xuân-Nga Cao Kam, Iglika Stoyneshka, Lidiya Tornyova, Janet D. Fodor, and William G. Sakas. 2008. Bigrams and the richness of the stimulus. Cognitive Science, 32(4):771–787.

Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. 1995 International Conference on Acoustics, Speech, and Signal Processing, 1:181–184.

Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics.

Marta Kutas, Katherine A. DeLong, and Nathaniel J. Smith. 2011. A look around at what lies ahead: Prediction and predictability in language processing. In Predictions in the Brain: Using Our Past to Generate a Future.

Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354.

Howard Lasnik and Jeffrey L. Lidz. 2017. The argument from the poverty of the stimulus. The Oxford Handbook of Universal Grammar, pages 221–248.

Julie Anne Legate and Charles D. Yang. 2002. Empirical re-assessment of stimulus poverty arguments. The Linguistic Review, 19(1-2):151–162.

Michael Lepori, Tal Linzen, and R. Thomas McCoy. 2020. Representations of syntax [MASK] useful: Effects of constituency and dependency structure in recursive LSTMs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3306–3316, Online. Association for Computational Linguistics.

Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177.

John Lewis and Jeffrey Elman. 2001. Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. Proceedings of the 26th Annual Boston University Conference on Language Development, 1.

Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Florence, Italy. Association for Computational Linguistics.

Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210–5217, Online. Association for Computational Linguistics.

Brian MacWhinney. 2000. The CHILDES Project: Tools for Analyzing Talk. Lawrence Erlbaum Associates.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks.

James L. Morgan and Katherine Demuth. 1996. Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition. Psychology Press.

Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the blank slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1352–1368, Dublin, Ireland. Association for Computational Linguistics.

Karl Mulligan, Robert Frank, and Tal Linzen. 2021. Structure here, bias there: Hierarchical generalization by jointly learning syntactic transformations. In Proceedings of the Society for Computation in Linguistics 2021, pages 125–135, Online. Association for Computational Linguistics.

Ludovica Pannitto and Aurélie Herbelot. 2020. Recurrent babbling: Evaluating the acquisition of grammar from limited input data. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 165–176, Online. Association for Computational Linguistics.

Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 959–968, Jeju Island, Korea. Association for Computational Linguistics.

Lisa Pearl. 2021. Poverty of the stimulus without tears. Language Learning and Development, pages 1–40.

Lisa Pearl and Benjamin Mis. 2016. The role of indirect positive evidence in syntactic acquisition: A look at anaphoric one. Language, 92:1–30.

Lisa Pearl and Jon Sprouse. 2013a. Computational models of acquisition for islands. Experimental Syntax and Island Effects, pages 109–131.

Lisa Pearl and Jon Sprouse. 2013b. Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the language acquisition problem. Language Acquisition, 20(1):23–68.

Andrew Perfors, Josh Tenenbaum, and Terry Regier. 2011. The learnability of abstract syntactic principles. Cognition, 118:306–338.

Jackson Petty and Robert Frank. 2021. Transformers generalize linearly. arXiv preprint arXiv:2109.12036.

Geoffrey K. Pullum and Barbara C. Scholz. 2002. Empirical assessment of stimulus poverty arguments. The Linguistic Review, 18(1-2):9–50.

Florencia Reali and Morten H. Christiansen. 2005. Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence. Cognitive Science, 29(6):1007–1028.

Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax acquisition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1842–1861, Florence, Italy. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Alex Warstadt and Samuel R. Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? Proceedings of the 42nd Annual Conference of the Cognitive Science Society.

Alex Warstadt and Samuel R. Bowman. 2022. What artificial neural networks can tell us about human language acquisition. arXiv preprint arXiv:2208.07998.

Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020a. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377–392.

Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020b. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217–235, Online. Association for Computational Linguistics.

Ethan Wilcox, Richard Futrell, and Roger Levy. 2021. Using computational models to test syntactic learnability. lingbuzz preprint lingbuzz/006327.

Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211–221, Brussels, Belgium. Association for Computational Linguistics.

Taha Yasseri, András Kornai, and János Kertész. 2012. A practical approach to language complexity: A Wikipedia case study. PLoS ONE, 7(11):e48386.
A CHILDES preprocessing details

The train, test, and validation split kept each document in the corpora intact to allow for learning of context. Since a document roughly corresponds to a single recording session, and the sentence order within each document was not randomized, the networks could utilize cross-sentence context while predicting the next word.

Generally, we kept the data as close as possible to the actual input that the child receives. However, in some cases we modified tokenization to match the CHILDES Treebank, a syntactically parsed subset of the CHILDES corpora. For instance, contractions were split; e.g., we replaced don’t with do n’t.
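As an illustration, this contraction splitting can be approximated with a single regular-expression rule (a sketch only; apostrophes are assumed normalized to ASCII, and the actual preprocessing followed the CHILDES Treebank conventions):

```python
import re

# Split English contractions so that "don't" becomes "do n't", matching
# the PTB-style tokenization of the CHILDES Treebank.
CONTRACTION = re.compile(r"\b(\w+)n't\b")

def split_contractions(sentence):
    # "don't" -> "do n't", "isn't" -> "is n't", "can't" -> "ca n't"
    return CONTRACTION.sub(lambda m: m.group(1) + " n't", sentence)

print(split_contractions("the children don't sleep"))  # the children do n't sleep
```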
The ages of the children vary by corpus, ranging from six months to twelve years. Almost 95% (49/52) of the corpora consist of transcriptions with children between one and six years of age.

Note that for Experiment 2, we used the same vocabulary as we used in Experiment 1, which means that words that were not present in Experiment 1’s vocabulary were replaced with <unk> tokens.

The unprocessed CHILDES datasets were downloaded in XML format from the online XML version10 of the CHILDES database (MacWhinney, 2000).11 A modified NLTK CHILDESCorpusReader12 was used to parse the XML into plain text for training.

10 https://childes.talkbank.org/data-xml/
11 https://childes.talkbank.org

The CHILDES dataset is licensed for use under a CC BY-NC-SA 3.0 license13. Under the terms of this license, the data can be freely used and adapted, as long as it is not used for commercial purposes and as long as attribution is provided.14 Our usage fits these criteria.

Though CHILDES contains corpora of many languages, we use only corpora from the North American English subset of CHILDES, which contains child-directed speech with many different North American children. See the CHILDES database for more details.

By the CHILDES rules for data citation,15 research that relies on more than 6 of the corpora need only cite the overall database, not each individual corpus.

All the data on CHILDES must adhere to IRB guidelines,16 including a requirement for anonymity.

The final dataset will be included in our GitHub repository, to be released soon. This dataset is not intended for commercial use.

CHILDES corpora included   The CHILDES corpora that we used were: Bates, Bernstein, Bliss, Bloom70, Bloom73, Bohannon, Braunwald, Brent, Brown, Carterette, Clark, Cornell, Demetras1, Demetras2, EllisWeismer, Evans, Feldman, Garvey, Gathercole, Gelman, Gillam, Gleason, HSLLD, Haggerty, Hall, Higginson, Kuczaj, MacWhinney, McCune, McMillan, Morisset, NH, Nelson, NewEngland, NewmanRatner, Normal, POLER, Peters, Post, Rollins, Sachs, Sawyer, Snow, Soderstrom, Sprott, Suppes, Tardif, Valian, VanHouten, VanKleeck, Warren, Weist.
B Hyperparameter Search and Model Implementation

We conducted a hyperparameter search for each of the architectures we investigated (LSTMs and Transformers). Our broad goal in this paper is to investigate the extent to which capturing the statistical properties of the CHILDES dataset naturally leads a learner to capture the structure of yes/no questions. Therefore, we sought to find the hyperparameter settings that made models most effective at capturing the statistical properties of CHILDES data, a goal which we operationalized as finding the model with the lowest perplexity.

12 https://www.nltk.org/howto/childes.html
13 https://talkbank.org/share/rules.html
14 https://creativecommons.org/licenses/by-nc-sa/3.0/
15 https://talkbank.org/share/citation.html
16 https://talkbank.org/share/irb/
B.1 Hyperparameter search

LSTMs   For LSTMs we explored the following hyperparameters via a grid search, for a total of 144 models:

1. layers: 2
2. hidden and embedding size: 200, 800
3. batch size: 20, 80
4. dropout rate: 0.0, 0.2, 0.4, 0.6
5. learning rate: 5.0, 10.0, 20.0
6. random seed: 3 per parameter combination, unique for each LSTM

The LSTM model with the lowest perplexity on the validation set after training had 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10.17 An LSTM model with these hyperparameters has 37,620,294 parameters.
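The size of this grid can be checked by enumerating the Cartesian product of the settings above, seeds included (a sketch; the hyperparameter names are ours):

```python
from itertools import product

# Hyperparameter grid for the LSTM search (values from the list above).
grid = {
    "layers": [2],
    "hidden_size": [200, 800],
    "batch_size": [20, 80],
    "dropout": [0.0, 0.2, 0.4, 0.6],
    "learning_rate": [5.0, 10.0, 20.0],
    "seed": [0, 1, 2],  # 3 seeds per parameter combination
}

# One config dict per point in the grid.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 1 * 2 * 2 * 4 * 3 * 3 = 144
```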
Transformers   For the Transformers we performed a hyperparameter sweep over the following hyperparameters, for a total of 84 models:

1. layers: 2, 4, 8, 16
2. context size: 50, 100, 500
3. hidden and embedding size: 200, 800, 1600
4. heads: 2, 4, 8, 16
5. batch size: 20, 80, 160
6. dropout rate: 0.0, 0.2, 0.4, 0.6
7. learning rate: 0.5, 1.0, 5.0, 10.0, 20.0
8. random seed: 3 per parameter combination

17 The hyperparameters we explored for the LSTMs were those of Gulordava et al. (2018), the code for which can be found at https://github.com/facebookresearch/colorlessgreenRNNs

LSTMs          Prepose First   Prepose Main
Delete First        0.01            0.14
Delete Main         0.39            0.12
Delete None         0.20            0.14

Table 1: Numerical results for LSTMs’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

The Transformer model with the lowest perplexity after training had 4 layers, a context size of 500, a hidden size of 800, a batch size of 10, 4 heads, a dropout rate of 0.2, and a learning rate of 5.0. A Transformer model with these parameters has 42,759,494 parameters.
B.2 Comment on model size

Although neural networks generally perform better as they increase in size, the best-performing models that we found were not the largest ones. This result is consistent with the finding of Warstadt et al. (2020b) that, for small training sets, smaller language models sometimes outperform larger ones. Thus, it is unlikely that scaling up models beyond the range we investigated would have yielded better CHILDES language models than the ones we trained.
B.3 Implementation

All models were implemented in PyTorch by building on code from https://github.com/facebookresearch/colorlessgreenRNNs and https://github.com/pytorch/examples/tree/main/word_language_model, and trained using Nvidia K80 GPUs. The final models will be included in our GitHub repository, which will be released soon. These models are not intended for commercial use.
C PREPOSE-ONE&DELETE-ONE Full Results

See Table 1 and Table 2 for these results.

C.1 Results using SLOR

See Table 3 and Table 4 for these results.

Transformers   Prepose First   Prepose Main
Delete First        0.01            0.16
Delete Main         0.31            0.06
Delete None         0.25            0.21

Table 2: Numerical results for Transformers’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

LSTMs          Prepose First   Prepose Main
Delete First        0.01            0.14
Delete Main         0.33            0.08
Delete None         0.26            0.18

Table 3: Analysis of LSTMs’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

Transformers   Prepose First   Prepose Main
Delete First        0.01            0.15
Delete Main         0.27            0.04
Delete None         0.29            0.24

Table 4: Analysis of Transformers’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.
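For reference, SLOR (the syntactic log-odds ratio) normalizes a sentence's language-model log probability by subtracting its unigram log probability and dividing by sentence length. A sketch, assuming the standard definition and using made-up per-token probabilities for a toy three-token sentence:

```python
import math

def slor(model_logprobs, unigram_logprobs):
    """Syntactic log-odds ratio: (log p_LM(s) - log p_unigram(s)) / |s|,
    given per-token log probabilities for a sentence s."""
    assert len(model_logprobs) == len(unigram_logprobs)
    return (sum(model_logprobs) - sum(unigram_logprobs)) / len(model_logprobs)

# Toy example: a three-token sentence (probabilities are illustrative only).
lm = [math.log(0.2), math.log(0.5), math.log(0.1)]   # LM token probabilities
uni = [math.log(0.01), math.log(0.05), math.log(0.02)]  # unigram probabilities
print(round(slor(lm, uni), 3))  # 2.303
```

Because of the length normalization, SLOR makes sentences of different lengths comparable, which is why it is a common alternative to raw log probability for acceptability comparisons.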
D BabyBERTa dataset evaluation

For an illustrative subset of the results on the Zorro evaluation dataset (discussed in Section 4.5), see Figure 4. For the full results, see Figure 5.
E Move-One Dataset Results

One approach used in several past papers (e.g., Lewis and Elman (2001) and Reali and Christiansen (2005)) is to evaluate models using pairs of sentences that can be formed by starting with a declarative sentence (e.g., (17)) and moving one of its auxiliaries to the front of the sentence. The first sentence in each pair (e.g., (18a)) follows HIERARCHICALQ, because the main auxiliary is moved, while the second (e.g., (18b)) follows LINEARQ, because the first auxiliary is moved.

(17) The children who are talking are sleeping.

(18) a. Are the children who are talking sleeping?
     b. Are the children who talking are sleeping?

Figure 4: The performance of a selected subset of model re-runs on a selected subset of the Zorro evaluations. Each Zorro evaluation targets a specific syntactic phenomenon—in the cases shown here, irregular verbs, subject-verb agreement across relative clauses, and correct argument ordering.

If a model assigns a higher probability to (18a) than (18b), that is evidence that the model favors HIERARCHICALQ over LINEARQ. While this preference is a necessary component of correctly learning HIERARCHICALQ, it is by no means sufficient: indeed, Kam et al. (2008) showed that models can prefer sentences consistent with HIERARCHICALQ over sentences consistent with LINEARQ due to shallow n-gram statistics rather than due to knowledge of hierarchical structure. More generally, there are infinitely many other incorrect hypotheses besides LINEARQ, and demonstrating successful learning of HIERARCHICALQ would require ruling out all of them. Investigating all possibilities is intractable, but we can at least investigate a few additional plausible ones. Thus, in the main paper we depart from prior work by considering a greater number of candidate sentences than just the pairs of sentences used in prior work.

To create the MOVE-ONE dataset, we randomly sampled 10,000 declarative sentences from our CFGs for which the first and main auxiliary were identical and then modified them to give 10,000 sentence pairs. To create the PREPOSE-ONE&DELETE-ONE dataset, we randomly sampled a different 10,000 declarative sentences from our CFGs for which the first and main auxiliary were different and then we modified them to give 10,000 6-tuples of sentences. See Appendix F for more details about the CFGs.
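The probability comparison between sentences like (18a) and (18b) reduces to summing per-token log probabilities under the language model. A minimal sketch, with a stand-in uniform model (the real comparison uses our trained LMs):

```python
import math

def sentence_logprob(model, tokens):
    """Total log probability of a token sequence under an autoregressive LM.
    `model(prefix)` is assumed to return a dict mapping each possible next
    token to its probability given the prefix."""
    total, prefix = 0.0, []
    for tok in tokens:
        total += math.log(model(prefix)[tok])
        prefix.append(tok)
    return total

# Stand-in uniform "model" over a tiny vocabulary (illustrative only).
VOCAB = ["are", "the", "children", "who", "talking", "sleeping", "?"]
toy_model = lambda prefix: {w: 1.0 / len(VOCAB) for w in VOCAB}

hier = "are the children who are talking sleeping ?".split()  # cf. (18a)
lin = "are the children who talking are sleeping ?".split()   # cf. (18b)
# A trained model prefers HIERARCHICALQ on this pair when:
#   sentence_logprob(model, hier) > sentence_logprob(model, lin)
print(sentence_logprob(toy_model, hier))
```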
F Context Free Grammars

Figure 6 contains the context-free grammar used for the analyses in Section 4.6. Figures 7 and 8 contain the context-free grammars used for the targeted evaluation sets in Section 5.2. Figure 9 contains the vocabulary used for all of these datasets.
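For illustration, sentences can be sampled from grammars of this form by recursive expansion. The grammar fragment below is in the style of the figures but is not one of the paper's exact grammars:

```python
import random

# A tiny grammar fragment in the style of Figures 6-9 (illustrative only).
GRAMMAR = {
    "S": [["NP_S", "VP_S"]],
    "NP_S": [["Det_S", "N_S"]],
    "VP_S": [["Aux_S", "IV"]],
    "Det_S": [["the"], ["some"], ["this"]],
    "N_S": [["baby"], ["girl"], ["boy"]],
    "Aux_S": [["does"], ["can"], ["would"]],
    "IV": [["play"], ["sleep"], ["walk"]],
}

def sample(symbol):
    """Recursively expand `symbol`, choosing uniformly among its rules."""
    if symbol not in GRAMMAR:  # terminal symbol
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [tok for part in expansion for tok in sample(part)]

print(" ".join(sample("S")))
```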
G Breakdown by lexical identity

Here we further break down models’ predictions for the FIRST-AUX ̸= MAIN-AUX evaluation set based on the identities of the two auxiliaries in the input sentence. Figure 10 gives the results for the LSTM in the QUESTION FORMATION condition; Figure 11 for the LSTM in the NEXT-WORD PREDICTION + QUESTION FORMATION condition; Figure 12 for the Transformer in the QUESTION FORMATION condition; and Figure 13 for the Transformer in the NEXT-WORD PREDICTION + QUESTION FORMATION condition.
H Example generated text

Figure 14 gives some example text generated by our models. Models trained on next-word prediction produce their predictions as a probability distribution over the vocabulary. To use such models to generate text, we sample a word from this distribution and then use that word as the model’s input for the next time step.
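This sampling loop can be sketched as follows; the uniform `next_word_distribution` is a stand-in for a trained model, which would compute the distribution from its learned parameters:

```python
import random

VOCAB = ["the", "baby", "can", "sleep", "<eos>"]

def next_word_distribution(prefix):
    """Stand-in for a trained LM: P(next word | prefix) over the vocabulary.
    Uniform here; a real model computes this from learned parameters."""
    return {w: 1.0 / len(VOCAB) for w in VOCAB}

def generate(max_len=20, seed=0):
    rng = random.Random(seed)
    words = []
    for _ in range(max_len):
        dist = next_word_distribution(words)
        # Sample the next word from the predicted distribution...
        word = rng.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<eos>":
            break
        words.append(word)  # ...and feed it back as the next input.
    return " ".join(words)

print(generate())
```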
Figure 5: Results on the targeted syntactic evaluations in Huebner et al. (2021) in percent accuracy. Evaluation names in Figure 4 were shortened.
S → {NP_S RC_S_BARE MAIN-AUX VP_S_PAST}
S → {NP_S RC_S_PAST MAIN-AUX VP_S_BARE}
S → {NP_S RC_S_BARE MAIN-AUX VP_S_PROG}
S → {NP_S RC_S_PROG MAIN-AUX VP_S_BARE}
S → {NP_S RC_S_PAST MAIN-AUX VP_S_PROG}
S → {NP_S RC_S_PROG MAIN-AUX VP_S_PAST}
S → {NP_P RC_P_BARE MAIN-AUX VP_P_PAST}
S → {NP_P RC_P_PAST MAIN-AUX VP_P_BARE}
S → {NP_P RC_P_BARE MAIN-AUX VP_P_PROG}
S → {NP_P RC_P_PROG MAIN-AUX VP_P_BARE}
S → {NP_P RC_P_PAST MAIN-AUX VP_P_PROG}
S → {NP_P RC_P_PROG MAIN-AUX VP_P_PAST}
NP_S → {Det_S N_S}
NP_P → {Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
VP_S_BARE → {Aux_S IV}
VP_S_BARE → {Aux_S TV NP_O}
VP_S_PROG → {Aux_S_BE IV_IS}
VP_S_PROG → {Aux_S_BE TV_IS NP_O}
VP_S_PAST → {Aux_S_HAS IV_HAS}
VP_S_PAST → {Aux_S_HAS TV_HAS NP_O}
VP_P_BARE → {Aux_P IV}
VP_P_BARE → {Aux_P TV NP_O}
VP_P_PROG → {Aux_P_BE IV_IS}
VP_P_PROG → {Aux_P_BE TV_IS NP_O}
VP_P_PAST → {Aux_P_HAS IV_HAS}
VP_P_PAST → {Aux_P_HAS TV_HAS NP_O}
RC_S_BARE → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S_PROG → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S_PAST → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P_BARE → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P_PROG → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P_PAST → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 6: CFG used to generate PREPOSE-ONE-AND-DELETE-ONE evaluation dataset
S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 7: CFG used to generate FIRST-AUX = MAIN-AUX evaluation dataset
S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 8: CFG used to generate FIRST-AUX ̸= MAIN-AUX evaluation dataset
Det_S → {the | some | this}
Det_P → {the | some | those}
N_S → {baby | girl | boy | animal | child | person | horse}
N_P → {babies | girls | boys | animals | children | people | horses}
IV → {play | read | draw | sit | fall | talk | sleep | try | work | walk}
IV_IS → {playing | reading | drawing | sitting | falling | talking | sleeping | trying | working | walking}
IV_HAS → {played | read | drawn | sat | fallen | talked | slept | tried | worked | walked}
TV → {call | see | find | help | feed | know | pick | visit | watch | reach}
TV_IS → {calling | seeing | finding | helping | feeding | knowing | picking | visiting | watching | reaching}
TV_HAS → {called | seen | found | helped | fed | known | picked | visited | watched | reached}
Aux_P → {do | did | can | would | shall}
Aux_S → {does | did | can | would | shall}
Aux_S_BE → {is | was}
Aux_P_BE → {are | were}
Aux_S_HAS → {has}
Aux_P_HAS → {have}
Prep → {by | behind}
Rel → {who | that}

Figure 9: Vocabulary used for the PREPOSE-ONE-AND-DELETE-ONE, FIRST-AUX ̸= MAIN-AUX, and FIRST-AUX = MAIN-AUX evaluation datasets
Figure 10: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ̸= MAIN-AUX eval-
|
| 2044 |
+
uation set for LSTMs first trained on next-word prediction and then question formation. The two leftmost bars in
|
| 2045 |
+
each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
|
| 2046 |
+
|
| 2047 |
+
AuxX =
|
| 2048 |
+
AuxX =
|
| 2049 |
+
Auxx =
|
| 2050 |
+
AuxX =
|
| 2051 |
+
Auxx =
|
| 2052 |
+
AuxX =
|
| 2053 |
+
AuxX =
|
| 2054 |
+
AuxX =
|
| 2055 |
+
AuxX =
|
| 2056 |
+
AuxX =
|
| 2057 |
+
AuxX =
|
| 2058 |
+
was
|
| 2059 |
+
have
|
| 2060 |
+
can
|
| 2061 |
+
were
|
| 2062 |
+
shall
|
| 2063 |
+
p!p
|
| 2064 |
+
would
|
| 2065 |
+
does
|
| 2066 |
+
op
|
| 2067 |
+
are
|
| 2068 |
+
s
|
| 2069 |
+
1.0
|
| 2070 |
+
0.5
|
| 2071 |
+
0.0
|
| 2072 |
+
1
|
| 2073 |
+
11
|
| 2074 |
+
1.0
|
| 2075 |
+
0.5
|
| 2076 |
+
0.0
|
| 2077 |
+
1.0
|
| 2078 |
+
0.5
|
| 2079 |
+
0.0
|
| 2080 |
+
TIT
|
| 2081 |
+
1.0
|
| 2082 |
+
behavior consistent
|
| 2083 |
+
0.5
|
| 2084 |
+
0.0
|
| 2085 |
+
1.0
|
| 2086 |
+
0.5
|
| 2087 |
+
0.0
|
| 2088 |
+
1.0
|
| 2089 |
+
Comparison
|
| 2090 |
+
0.5
|
| 2091 |
+
First-vs-main
|
| 2092 |
+
0.0
|
| 2093 |
+
AuxY-vs-AuxX
|
| 2094 |
+
0.0
|
| 2095 |
+
1.0
|
| 2096 |
+
word
|
| 2097 |
+
0.5
|
| 2098 |
+
0.0
|
| 2099 |
+
1.0
|
| 2100 |
+
First
|
| 2101 |
+
0.5
|
| 2102 |
+
0.0
|
| 2103 |
+
1.0
|
| 2104 |
+
0.5
|
| 2105 |
+
0.0
|
| 2106 |
+
1.0
|
| 2107 |
+
0.5
|
| 2108 |
+
0.0Figure 11: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ̸= MAIN-AUX evalu-
|
| 2109 |
+
ation set for LSTMs trained only on question formation. The two leftmost bars in each cell show a First-vs-main
|
| 2110 |
+
comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
|
| 2111 |
+
Figure 12: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

[Figure images: bar-chart grids with the same layout as Figure 10 (one cell per AuxX/AuxY pair; y-axis 0.0–1.0; legend: First-vs-main, AuxY-vs-AuxX).]
Figure 13: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers trained only on question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
Text from CHILDES: do you eat in nursery school too ? oh . it 's nursery school . xxx . what do you get to eat ? what else do you get to eat besides a book ? do n't play with those books . what else do you have ? what do you have in nursery school ? alright - you can take it . why do n't you put the turtle back . well - you 're not going to change his water now . tonight . tonight - you change it about ten times a day . no - you can do it later . forget it now . too bad . too bad . that poor turtle . you do n't let him sit there five minutes . that was five minutes ?

LSTM-generated text: do you eat in nursery school too ? oh . it 's nursery school . no ! tomorrow 's Friday . right . yeah - yeah . yeah I know what . I told you that he liked snipsnap that way . well this is address the grayish kings . ok . what kind of bird do you want to play with ? what else do you want to do ? your crayons . got a circle . I 'm going to use these . there . remember ? ok do you want to put all of these there ? ok let 's put everything away . what do you want to do ? you want to play in the box ? it 's just like a ball because it has a ball on it .

Transformer-generated text: do you eat in nursery school too ? oh . it 's nursery school . mm-hm . and it looks like a little girl . mm-hm . like a little girl . hm , mm-hm . looks like she 's concerned about baseball this is a picture of a double football player . mm-hm . um , it has a uniform on it . mm-hm . anything more ? no . ok great . got a hat on . and a flower and a little flower . remember that ? do you remember that word ? what do you remember that word ? do you remember what it word says ? it says seven - eight . what 's that word ? it says eight . look at that word . three - four - six ... eight - nine ...

Figure 14: Comparison of text generated by the LSTM and Transformer models with a block of text chosen randomly from the training data. Both the LSTMs and the Transformers were seeded with the first three sentences of the text taken from CHILDES, which are underlined in the two model-generated texts. Note that neither of the model-generated texts was cherry-picked, either for quality or to be representative of the models' usual output: rather, they were the first outputs generated when seeded with the underlined portion.