Koulb committed
Commit 987ee29 · verified · Parent: a96b38e


This view is limited to 50 files because it contains too many changes. See the raw diff for the complete change set.

Files changed (50)
  1. .gitattributes +1 -0
  2. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/from_se3_transformer/__pycache__/representations.cpython-312.pyc +0 -0
  3. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/from_se3_transformer/license.txt +24 -0
  4. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/from_se3_transformer/representations.py +204 -0
  5. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/graph.py +934 -0
  6. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/__init__.py +1 -0
  7. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/__pycache__/__init__.cpython-312.pyc +0 -0
  8. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/__pycache__/pred_ham.cpython-312.pyc +0 -0
  9. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/band_config.json +8 -0
  10. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/dense_calc.jl +234 -0
  11. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/dense_calc.py +277 -0
  12. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/inference_default.ini +23 -0
  13. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/local_coordinate.jl +79 -0
  14. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/pred_ham.py +365 -0
  15. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/restore_blocks.jl +115 -0
  16. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/sparse_calc.jl +412 -0
  17. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/kernel.py +844 -0
  18. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/model.py +676 -0
  19. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__init__.py +4 -0
  20. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/__init__.cpython-312.pyc +0 -0
  21. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/abacus_get_data.cpython-312.pyc +0 -0
  22. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/get_rc.cpython-312.pyc +0 -0
  23. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/openmx_parse.cpython-312.pyc +0 -0
  24. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/siesta_get_data.cpython-312.pyc +0 -0
  25. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/abacus_get_data.py +340 -0
  26. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/aims_get_data.jl +477 -0
  27. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/get_rc.py +165 -0
  28. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/openmx_get_data.jl +471 -0
  29. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/openmx_parse.py +425 -0
  30. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/periodic_table.json +0 -0
  31. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/preprocess_default.ini +20 -0
  32. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/siesta_get_data.py +336 -0
  33. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/rotate.py +277 -0
  34. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__init__.py +0 -0
  35. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__pycache__/__init__.cpython-312.pyc +0 -0
  36. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__pycache__/preprocess.cpython-312.pyc +0 -0
  37. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__pycache__/train.cpython-312.pyc +0 -0
  38. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/evaluate.py +173 -0
  39. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/inference.py +157 -0
  40. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/preprocess.py +199 -0
  41. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/train.py +23 -0
  42. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/utils.py +213 -0
  43. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/stderr.txt +0 -0
  44. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rc.h5 +3 -0
  45. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rh.h5 +3 -0
  46. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rh_pred.h5 +3 -0
  47. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rlat.dat +3 -0
  48. example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/site_positions.dat +3 -0
  49. example/diamond/1_data_prepare/data/bands/sc/reconstruction/calc.py +11 -0
  50. example/diamond/1_data_prepare/data/bands/sc/reconstruction/hpro.log +59 -0
.gitattributes CHANGED
@@ -58,3 +58,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ example/diamond/1_data_prepare/data/bands/sc/scf/VSC filter=lfs diff=lfs merge=lfs -text
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/from_se3_transformer/__pycache__/representations.cpython-312.pyc ADDED
Binary file (8.14 kB)
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/from_se3_transformer/license.txt ADDED
@@ -0,0 +1,24 @@
+ The code in this folder was obtained from "https://github.com/mariogeiger/se3cnn/", which has the following license:
+
+
+ MIT License
+
+ Copyright (c) 2019 Mario Geiger
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/from_se3_transformer/representations.py ADDED
@@ -0,0 +1,204 @@
+ import torch
+ import numpy as np
+
+
+ def semifactorial(x):
+     """Compute the semifactorial function x!!.
+
+     x!! = x * (x-2) * (x-4) *...
+
+     Args:
+         x: positive int
+     Returns:
+         float for x!!
+     """
+     y = 1.
+     for n in range(x, 1, -2):
+         y *= n
+     return y
+
+
+ def pochhammer(x, k):
+     """Compute the pochhammer symbol (x)_k.
+
+     (x)_k = x * (x+1) * (x+2) *...* (x+k-1)
+
+     Args:
+         x: positive int
+     Returns:
+         float for (x)_k
+     """
+     xf = float(x)
+     for n in range(x+1, x+k):
+         xf *= n
+     return xf
+
+ def lpmv(l, m, x):
+     """Associated Legendre function including Condon-Shortley phase.
+
+     Args:
+         m: int order
+         l: int degree
+         x: float argument tensor
+     Returns:
+         tensor of x-shape
+     """
+     m_abs = abs(m)
+     if m_abs > l:
+         return torch.zeros_like(x)
+
+     # Compute P_m^m
+     yold = ((-1)**m_abs * semifactorial(2*m_abs-1)) * torch.pow(1-x*x, m_abs/2)
+
+     # Compute P_{m+1}^m
+     if m_abs != l:
+         y = x * (2*m_abs+1) * yold
+     else:
+         y = yold
+
+     # Compute P_{l}^m from recursion in P_{l-1}^m and P_{l-2}^m
+     for i in range(m_abs+2, l+1):
+         tmp = y
+         # Inplace speedup
+         y = ((2*i-1) / (i-m_abs)) * x * y
+         y -= ((i+m_abs-1)/(i-m_abs)) * yold
+         yold = tmp
+
+     if m < 0:
+         y *= ((-1)**m / pochhammer(l+m+1, -2*m))
+
+     return y
+
+ def tesseral_harmonics(l, m, theta=0., phi=0.):
+     """Tesseral spherical harmonic with Condon-Shortley phase.
+
+     The Tesseral spherical harmonics are also known as the real spherical
+     harmonics.
+
+     Args:
+         l: int for degree
+         m: int for order, where -l <= m < l
+         theta: collatitude or polar angle
+         phi: longitude or azimuth
+     Returns:
+         tensor of shape theta
+     """
+     assert abs(m) <= l, "absolute value of order m must be <= degree l"
+
+     N = np.sqrt((2*l+1) / (4*np.pi))
+     leg = lpmv(l, abs(m), torch.cos(theta))
+     if m == 0:
+         return N*leg
+     elif m > 0:
+         Y = torch.cos(m*phi) * leg
+     else:
+         Y = torch.sin(abs(m)*phi) * leg
+     N *= np.sqrt(2. / pochhammer(l-abs(m)+1, 2*abs(m)))
+     Y *= N
+     return Y
+
+ class SphericalHarmonics(object):
+     def __init__(self):
+         self.leg = {}
+
+     def clear(self):
+         self.leg = {}
+
+     def negative_lpmv(self, l, m, y):
+         """Compute negative order coefficients"""
+         if m < 0:
+             y *= ((-1)**m / pochhammer(l+m+1, -2*m))
+         return y
+
+     def lpmv(self, l, m, x):
+         """Associated Legendre function including Condon-Shortley phase.
+
+         Args:
+             m: int order
+             l: int degree
+             x: float argument tensor
+         Returns:
+             tensor of x-shape
+         """
+         # Check memoized versions
+         m_abs = abs(m)
+         if (l,m) in self.leg:
+             return self.leg[(l,m)]
+         elif m_abs > l:
+             return None
+         elif l == 0:
+             self.leg[(l,m)] = torch.ones_like(x)
+             return self.leg[(l,m)]
+
+         # Check if on boundary else recurse solution down to boundary
+         if m_abs == l:
+             # Compute P_m^m
+             y = (-1)**m_abs * semifactorial(2*m_abs-1)
+             y *= torch.pow(1-x*x, m_abs/2)
+             self.leg[(l,m)] = self.negative_lpmv(l, m, y)
+             return self.leg[(l,m)]
+         else:
+             # Recursively precompute lower degree harmonics
+             self.lpmv(l-1, m, x)
+
+         # Compute P_{l}^m from recursion in P_{l-1}^m and P_{l-2}^m
+         # Inplace speedup
+         y = ((2*l-1) / (l-m_abs)) * x * self.lpmv(l-1, m_abs, x)
+         if l - m_abs > 1:
+             y -= ((l+m_abs-1)/(l-m_abs)) * self.leg[(l-2, m_abs)]
+         #self.leg[(l, m_abs)] = y
+
+         if m < 0:
+             y = self.negative_lpmv(l, m, y)
+         self.leg[(l,m)] = y
+
+         return self.leg[(l,m)]
+
+     def get_element(self, l, m, theta, phi):
+         """Tesseral spherical harmonic with Condon-Shortley phase.
+
+         The Tesseral spherical harmonics are also known as the real spherical
+         harmonics.
+
+         Args:
+             l: int for degree
+             m: int for order, where -l <= m < l
+             theta: collatitude or polar angle
+             phi: longitude or azimuth
+         Returns:
+             tensor of shape theta
+         """
+         assert abs(m) <= l, "absolute value of order m must be <= degree l"
+
+         N = np.sqrt((2*l+1) / (4*np.pi))
+         leg = self.lpmv(l, abs(m), torch.cos(theta))
+         if m == 0:
+             return N*leg
+         elif m > 0:
+             Y = torch.cos(m*phi) * leg
+         else:
+             Y = torch.sin(abs(m)*phi) * leg
+         N *= np.sqrt(2. / pochhammer(l-abs(m)+1, 2*abs(m)))
+         Y *= N
+         return Y
+
+     def get(self, l, theta, phi, refresh=True):
+         """Tesseral harmonic with Condon-Shortley phase.
+
+         The Tesseral spherical harmonics are also known as the real spherical
+         harmonics.
+
+         Args:
+             l: int for degree
+             theta: collatitude or polar angle
+             phi: longitude or azimuth
+         Returns:
+             tensor of shape [*theta.shape, 2*l+1]
+         """
+         results = []
+         if refresh:
+             self.clear()
+         for m in range(-l, l+1):
+             results.append(self.get_element(l, m, theta, phi))
+         return torch.stack(results, -1)
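The Legendre recursion and tesseral-harmonic normalization in the `representations.py` diff above can be sanity-checked against closed forms. Below is a scalar re-derivation in plain Python (a sketch using only `math`, so it runs without torch; these standalone functions mirror the module's logic but are not the module itself):

```python
import math

def semifactorial(x):
    """x!! = x * (x-2) * (x-4) * ... (empty product -> 1.0)."""
    y = 1.0
    for n in range(x, 1, -2):
        y *= n
    return y

def pochhammer(x, k):
    """Pochhammer symbol (x)_k = x * (x+1) * ... * (x+k-1)."""
    xf = float(x)
    for n in range(x + 1, x + k):
        xf *= n
    return xf

def lpmv(l, m, x):
    """Scalar associated Legendre P_l^m(x) with Condon-Shortley phase,
    following the same three-step recursion as the tensor version."""
    m_abs = abs(m)
    if m_abs > l:
        return 0.0
    # P_m^m, then P_{m+1}^m, then upward recursion in degree l
    yold = ((-1) ** m_abs * semifactorial(2 * m_abs - 1)) * (1 - x * x) ** (m_abs / 2)
    y = x * (2 * m_abs + 1) * yold if m_abs != l else yold
    for i in range(m_abs + 2, l + 1):
        y, yold = ((2 * i - 1) / (i - m_abs)) * x * y - ((i + m_abs - 1) / (i - m_abs)) * yold, y
    if m < 0:
        y *= (-1) ** m / pochhammer(l + m + 1, -2 * m)
    return y

def tesseral(l, m, theta, phi):
    """Real (tesseral) spherical harmonic Y_lm(theta, phi)."""
    N = math.sqrt((2 * l + 1) / (4 * math.pi))
    leg = lpmv(l, abs(m), math.cos(theta))
    if m == 0:
        return N * leg
    trig = math.cos(m * phi) if m > 0 else math.sin(abs(m) * phi)
    return N * math.sqrt(2.0 / pochhammer(l - abs(m) + 1, 2 * abs(m))) * trig * leg

# Closed-form checks:
#   Y_10 = sqrt(3/4pi) cos(theta)
#   Y_20 = sqrt(5/16pi) (3 cos^2(theta) - 1)
theta = 0.7
assert abs(tesseral(1, 0, theta, 0.0) - math.sqrt(3 / (4 * math.pi)) * math.cos(theta)) < 1e-12
assert abs(tesseral(2, 0, theta, 0.0) - math.sqrt(5 / (16 * math.pi)) * (3 * math.cos(theta) ** 2 - 1)) < 1e-12
```

The Condon-Shortley phase shows up as the `(-1)**m_abs` factor in `P_m^m`; for example `tesseral(1, 1, pi/2, 0)` comes out as `-sqrt(3/4pi)`, matching the standard real-harmonic convention.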
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/graph.py ADDED
@@ -0,0 +1,934 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import collections
2
+ import itertools
3
+ import os
4
+ import json
5
+ import warnings
6
+ import math
7
+
8
+ import torch
9
+ import torch_geometric
10
+ from torch_geometric.data import Data, Batch
11
+ import numpy as np
12
+ import h5py
13
+
14
+ from .model import get_spherical_from_cartesian, SphericalHarmonics
15
+ from .from_pymatgen import find_neighbors, _one_to_three, _compute_cube_index, _three_to_one
16
+
17
+
18
+ """
19
+ The function _spherical_harmonics below is come from "https://github.com/e3nn/e3nn", which has the MIT License below
20
+
21
+ ---------------------------------------------------------------------------
22
+ MIT License
23
+
24
+ Euclidean neural networks (e3nn) Copyright (c) 2020, The Regents of the
25
+ University of California, through Lawrence Berkeley National Laboratory
26
+ (subject to receipt of any required approvals from the U.S. Dept. of Energy),
27
+ Ecole Polytechnique Federale de Lausanne (EPFL), Free University of Berlin
28
+ and Kostiantyn Lapchevskyi. All rights reserved.
29
+
30
+ Permission is hereby granted, free of charge, to any person obtaining a copy
31
+ of this software and associated documentation files (the "Software"), to deal
32
+ in the Software without restriction, including without limitation the rights to use,
33
+ copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
34
+ Software, and to permit persons to whom the Software is furnished to do so,
35
+ subject to the following conditions:
36
+
37
+ The above copyright notice and this permission notice shall be included in all
38
+ copies or substantial portions of the Software.
39
+
40
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
41
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
42
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
43
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
44
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
45
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
46
+ SOFTWARE.
47
+ """
48
+ def _spherical_harmonics(lmax: int, x: torch.Tensor, y: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
49
+ sh_0_0 = torch.ones_like(x)
50
+ if lmax == 0:
51
+ return torch.stack([
52
+ sh_0_0,
53
+ ], dim=-1)
54
+
55
+ sh_1_0 = x
56
+ sh_1_1 = y
57
+ sh_1_2 = z
58
+ if lmax == 1:
59
+ return torch.stack([
60
+ sh_0_0,
61
+ sh_1_0, sh_1_1, sh_1_2
62
+ ], dim=-1)
63
+
64
+ sh_2_0 = math.sqrt(3.0) * x * z
65
+ sh_2_1 = math.sqrt(3.0) * x * y
66
+ y2 = y.pow(2)
67
+ x2z2 = x.pow(2) + z.pow(2)
68
+ sh_2_2 = y2 - 0.5 * x2z2
69
+ sh_2_3 = math.sqrt(3.0) * y * z
70
+ sh_2_4 = math.sqrt(3.0) / 2.0 * (z.pow(2) - x.pow(2))
71
+
72
+ if lmax == 2:
73
+ return torch.stack([
74
+ sh_0_0,
75
+ sh_1_0, sh_1_1, sh_1_2,
76
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4
77
+ ], dim=-1)
78
+
79
+ sh_3_0 = math.sqrt(5.0 / 6.0) * (sh_2_0 * z + sh_2_4 * x)
80
+ sh_3_1 = math.sqrt(5.0) * sh_2_0 * y
81
+ sh_3_2 = math.sqrt(3.0 / 8.0) * (4.0 * y2 - x2z2) * x
82
+ sh_3_3 = 0.5 * y * (2.0 * y2 - 3.0 * x2z2)
83
+ sh_3_4 = math.sqrt(3.0 / 8.0) * z * (4.0 * y2 - x2z2)
84
+ sh_3_5 = math.sqrt(5.0) * sh_2_4 * y
85
+ sh_3_6 = math.sqrt(5.0 / 6.0) * (sh_2_4 * z - sh_2_0 * x)
86
+
87
+ if lmax == 3:
88
+ return torch.stack([
89
+ sh_0_0,
90
+ sh_1_0, sh_1_1, sh_1_2,
91
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
92
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6
93
+ ], dim=-1)
94
+
95
+ sh_4_0 = 0.935414346693485*sh_3_0*z + 0.935414346693485*sh_3_6*x
96
+ sh_4_1 = 0.661437827766148*sh_3_0*y + 0.810092587300982*sh_3_1*z + 0.810092587300983*sh_3_5*x
97
+ sh_4_2 = -0.176776695296637*sh_3_0*z + 0.866025403784439*sh_3_1*y + 0.684653196881458*sh_3_2*z + 0.684653196881457*sh_3_4*x + 0.176776695296637*sh_3_6*x
98
+ sh_4_3 = -0.306186217847897*sh_3_1*z + 0.968245836551855*sh_3_2*y + 0.790569415042095*sh_3_3*x + 0.306186217847897*sh_3_5*x
99
+ sh_4_4 = -0.612372435695795*sh_3_2*x + sh_3_3*y - 0.612372435695795*sh_3_4*z
100
+ sh_4_5 = -0.306186217847897*sh_3_1*x + 0.790569415042096*sh_3_3*z + 0.968245836551854*sh_3_4*y - 0.306186217847897*sh_3_5*z
101
+ sh_4_6 = -0.176776695296637*sh_3_0*x - 0.684653196881457*sh_3_2*x + 0.684653196881457*sh_3_4*z + 0.866025403784439*sh_3_5*y - 0.176776695296637*sh_3_6*z
102
+ sh_4_7 = -0.810092587300982*sh_3_1*x + 0.810092587300982*sh_3_5*z + 0.661437827766148*sh_3_6*y
103
+ sh_4_8 = -0.935414346693485*sh_3_0*x + 0.935414346693486*sh_3_6*z
104
+ if lmax == 4:
105
+ return torch.stack([
106
+ sh_0_0,
107
+ sh_1_0, sh_1_1, sh_1_2,
108
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
109
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
110
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8
111
+ ], dim=-1)
112
+
113
+ sh_5_0 = 0.948683298050513*sh_4_0*z + 0.948683298050513*sh_4_8*x
114
+ sh_5_1 = 0.6*sh_4_0*y + 0.848528137423857*sh_4_1*z + 0.848528137423858*sh_4_7*x
115
+ sh_5_2 = -0.14142135623731*sh_4_0*z + 0.8*sh_4_1*y + 0.748331477354788*sh_4_2*z + 0.748331477354788*sh_4_6*x + 0.14142135623731*sh_4_8*x
116
+ sh_5_3 = -0.244948974278318*sh_4_1*z + 0.916515138991168*sh_4_2*y + 0.648074069840786*sh_4_3*z + 0.648074069840787*sh_4_5*x + 0.244948974278318*sh_4_7*x
117
+ sh_5_4 = -0.346410161513776*sh_4_2*z + 0.979795897113272*sh_4_3*y + 0.774596669241484*sh_4_4*x + 0.346410161513776*sh_4_6*x
118
+ sh_5_5 = -0.632455532033676*sh_4_3*x + sh_4_4*y - 0.632455532033676*sh_4_5*z
119
+ sh_5_6 = -0.346410161513776*sh_4_2*x + 0.774596669241483*sh_4_4*z + 0.979795897113273*sh_4_5*y - 0.346410161513776*sh_4_6*z
120
+ sh_5_7 = -0.244948974278318*sh_4_1*x - 0.648074069840787*sh_4_3*x + 0.648074069840786*sh_4_5*z + 0.916515138991169*sh_4_6*y - 0.244948974278318*sh_4_7*z
121
+ sh_5_8 = -0.141421356237309*sh_4_0*x - 0.748331477354788*sh_4_2*x + 0.748331477354788*sh_4_6*z + 0.8*sh_4_7*y - 0.141421356237309*sh_4_8*z
122
+ sh_5_9 = -0.848528137423857*sh_4_1*x + 0.848528137423857*sh_4_7*z + 0.6*sh_4_8*y
123
+ sh_5_10 = -0.948683298050513*sh_4_0*x + 0.948683298050513*sh_4_8*z
124
+ if lmax == 5:
125
+ return torch.stack([
126
+ sh_0_0,
127
+ sh_1_0, sh_1_1, sh_1_2,
128
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
129
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
130
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
131
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10
132
+ ], dim=-1)
133
+
134
+ sh_6_0 = 0.957427107756337*sh_5_0*z + 0.957427107756338*sh_5_10*x
135
+ sh_6_1 = 0.552770798392565*sh_5_0*y + 0.874007373475125*sh_5_1*z + 0.874007373475125*sh_5_9*x
136
+ sh_6_2 = -0.117851130197757*sh_5_0*z + 0.745355992499929*sh_5_1*y + 0.117851130197758*sh_5_10*x + 0.790569415042094*sh_5_2*z + 0.790569415042093*sh_5_8*x
137
+ sh_6_3 = -0.204124145231931*sh_5_1*z + 0.866025403784437*sh_5_2*y + 0.707106781186546*sh_5_3*z + 0.707106781186547*sh_5_7*x + 0.204124145231931*sh_5_9*x
138
+ sh_6_4 = -0.288675134594813*sh_5_2*z + 0.942809041582062*sh_5_3*y + 0.623609564462323*sh_5_4*z + 0.623609564462322*sh_5_6*x + 0.288675134594812*sh_5_8*x
139
+ sh_6_5 = -0.372677996249965*sh_5_3*z + 0.986013297183268*sh_5_4*y + 0.763762615825972*sh_5_5*x + 0.372677996249964*sh_5_7*x
140
+ sh_6_6 = -0.645497224367901*sh_5_4*x + sh_5_5*y - 0.645497224367902*sh_5_6*z
141
+ sh_6_7 = -0.372677996249964*sh_5_3*x + 0.763762615825972*sh_5_5*z + 0.986013297183269*sh_5_6*y - 0.372677996249965*sh_5_7*z
142
+ sh_6_8 = -0.288675134594813*sh_5_2*x - 0.623609564462323*sh_5_4*x + 0.623609564462323*sh_5_6*z + 0.942809041582062*sh_5_7*y - 0.288675134594812*sh_5_8*z
143
+ sh_6_9 = -0.20412414523193*sh_5_1*x - 0.707106781186546*sh_5_3*x + 0.707106781186547*sh_5_7*z + 0.866025403784438*sh_5_8*y - 0.204124145231931*sh_5_9*z
144
+ sh_6_10 = -0.117851130197757*sh_5_0*x - 0.117851130197757*sh_5_10*z - 0.790569415042094*sh_5_2*x + 0.790569415042093*sh_5_8*z + 0.745355992499929*sh_5_9*y
145
+ sh_6_11 = -0.874007373475124*sh_5_1*x + 0.552770798392566*sh_5_10*y + 0.874007373475125*sh_5_9*z
146
+ sh_6_12 = -0.957427107756337*sh_5_0*x + 0.957427107756336*sh_5_10*z
147
+ if lmax == 6:
148
+ return torch.stack([
149
+ sh_0_0,
150
+ sh_1_0, sh_1_1, sh_1_2,
151
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
152
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
153
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
154
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10,
155
+ sh_6_0, sh_6_1, sh_6_2, sh_6_3, sh_6_4, sh_6_5, sh_6_6, sh_6_7, sh_6_8, sh_6_9, sh_6_10, sh_6_11, sh_6_12
156
+ ], dim=-1)
157
+
158
+ sh_7_0 = 0.963624111659433*sh_6_0*z + 0.963624111659432*sh_6_12*x
159
+ sh_7_1 = 0.515078753637713*sh_6_0*y + 0.892142571199771*sh_6_1*z + 0.892142571199771*sh_6_11*x
160
+ sh_7_2 = -0.101015254455221*sh_6_0*z + 0.699854212223765*sh_6_1*y + 0.82065180664829*sh_6_10*x + 0.101015254455222*sh_6_12*x + 0.82065180664829*sh_6_2*z
161
+ sh_7_3 = -0.174963553055942*sh_6_1*z + 0.174963553055941*sh_6_11*x + 0.82065180664829*sh_6_2*y + 0.749149177264394*sh_6_3*z + 0.749149177264394*sh_6_9*x
162
+ sh_7_4 = 0.247435829652697*sh_6_10*x - 0.247435829652697*sh_6_2*z + 0.903507902905251*sh_6_3*y + 0.677630927178938*sh_6_4*z + 0.677630927178938*sh_6_8*x
163
+ sh_7_5 = -0.31943828249997*sh_6_3*z + 0.95831484749991*sh_6_4*y + 0.606091526731326*sh_6_5*z + 0.606091526731326*sh_6_7*x + 0.31943828249997*sh_6_9*x
164
+ sh_7_6 = -0.391230398217976*sh_6_4*z + 0.989743318610787*sh_6_5*y + 0.755928946018454*sh_6_6*x + 0.391230398217975*sh_6_8*x
165
+ sh_7_7 = -0.654653670707977*sh_6_5*x + sh_6_6*y - 0.654653670707978*sh_6_7*z
166
+ sh_7_8 = -0.391230398217976*sh_6_4*x + 0.755928946018455*sh_6_6*z + 0.989743318610787*sh_6_7*y - 0.391230398217975*sh_6_8*z
167
+ sh_7_9 = -0.31943828249997*sh_6_3*x - 0.606091526731327*sh_6_5*x + 0.606091526731326*sh_6_7*z + 0.95831484749991*sh_6_8*y - 0.31943828249997*sh_6_9*z
168
+ sh_7_10 = -0.247435829652697*sh_6_10*z - 0.247435829652697*sh_6_2*x - 0.677630927178938*sh_6_4*x + 0.677630927178938*sh_6_8*z + 0.903507902905251*sh_6_9*y
169
+ sh_7_11 = -0.174963553055942*sh_6_1*x + 0.820651806648289*sh_6_10*y - 0.174963553055941*sh_6_11*z - 0.749149177264394*sh_6_3*x + 0.749149177264394*sh_6_9*z
170
+ sh_7_12 = -0.101015254455221*sh_6_0*x + 0.82065180664829*sh_6_10*z + 0.699854212223766*sh_6_11*y - 0.101015254455221*sh_6_12*z - 0.82065180664829*sh_6_2*x
171
+ sh_7_13 = -0.892142571199772*sh_6_1*x + 0.892142571199772*sh_6_11*z + 0.515078753637713*sh_6_12*y
172
+ sh_7_14 = -0.963624111659431*sh_6_0*x + 0.963624111659433*sh_6_12*z
173
+ if lmax == 7:
174
+ return torch.stack([
175
+ sh_0_0,
176
+ sh_1_0, sh_1_1, sh_1_2,
177
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
178
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
179
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
180
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10,
181
+ sh_6_0, sh_6_1, sh_6_2, sh_6_3, sh_6_4, sh_6_5, sh_6_6, sh_6_7, sh_6_8, sh_6_9, sh_6_10, sh_6_11, sh_6_12,
182
+ sh_7_0, sh_7_1, sh_7_2, sh_7_3, sh_7_4, sh_7_5, sh_7_6, sh_7_7, sh_7_8, sh_7_9, sh_7_10, sh_7_11, sh_7_12, sh_7_13, sh_7_14
183
+ ], dim=-1)
184
+
185
+ sh_8_0 = 0.968245836551854*sh_7_0*z + 0.968245836551853*sh_7_14*x
186
+ sh_8_1 = 0.484122918275928*sh_7_0*y + 0.90571104663684*sh_7_1*z + 0.90571104663684*sh_7_13*x
187
+ sh_8_2 = -0.0883883476483189*sh_7_0*z + 0.661437827766148*sh_7_1*y + 0.843171097702002*sh_7_12*x + 0.088388347648318*sh_7_14*x + 0.843171097702003*sh_7_2*z
188
+ sh_8_3 = -0.153093108923948*sh_7_1*z + 0.7806247497998*sh_7_11*x + 0.153093108923949*sh_7_13*x + 0.7806247497998*sh_7_2*y + 0.780624749799799*sh_7_3*z
189
+ sh_8_4 = 0.718070330817253*sh_7_10*x + 0.21650635094611*sh_7_12*x - 0.21650635094611*sh_7_2*z + 0.866025403784439*sh_7_3*y + 0.718070330817254*sh_7_4*z
190
+ sh_8_5 = 0.279508497187474*sh_7_11*x - 0.279508497187474*sh_7_3*z + 0.927024810886958*sh_7_4*y + 0.655505530106345*sh_7_5*z + 0.655505530106344*sh_7_9*x
191
+ sh_8_6 = 0.342326598440729*sh_7_10*x - 0.342326598440729*sh_7_4*z + 0.968245836551854*sh_7_5*y + 0.592927061281572*sh_7_6*z + 0.592927061281571*sh_7_8*x
192
+ sh_8_7 = -0.405046293650492*sh_7_5*z + 0.992156741649221*sh_7_6*y + 0.75*sh_7_7*x + 0.405046293650492*sh_7_9*x
193
+ sh_8_8 = -0.661437827766148*sh_7_6*x + sh_7_7*y - 0.661437827766148*sh_7_8*z
194
+ sh_8_9 = -0.405046293650492*sh_7_5*x + 0.75*sh_7_7*z + 0.992156741649221*sh_7_8*y - 0.405046293650491*sh_7_9*z
195
+ sh_8_10 = -0.342326598440728*sh_7_10*z - 0.342326598440729*sh_7_4*x - 0.592927061281571*sh_7_6*x + 0.592927061281571*sh_7_8*z + 0.968245836551855*sh_7_9*y
196
+ sh_8_11 = 0.927024810886958*sh_7_10*y - 0.279508497187474*sh_7_11*z - 0.279508497187474*sh_7_3*x - 0.655505530106345*sh_7_5*x + 0.655505530106345*sh_7_9*z
197
+ sh_8_12 = 0.718070330817253*sh_7_10*z + 0.866025403784439*sh_7_11*y - 0.216506350946109*sh_7_12*z - 0.216506350946109*sh_7_2*x - 0.718070330817254*sh_7_4*x
198
+ sh_8_13 = -0.153093108923948*sh_7_1*x + 0.7806247497998*sh_7_11*z + 0.7806247497998*sh_7_12*y - 0.153093108923948*sh_7_13*z - 0.780624749799799*sh_7_3*x
199
+ sh_8_14 = -0.0883883476483179*sh_7_0*x + 0.843171097702002*sh_7_12*z + 0.661437827766147*sh_7_13*y - 0.088388347648319*sh_7_14*z - 0.843171097702002*sh_7_2*x
200
+ sh_8_15 = -0.90571104663684*sh_7_1*x + 0.90571104663684*sh_7_13*z + 0.484122918275927*sh_7_14*y
201
+ sh_8_16 = -0.968245836551853*sh_7_0*x + 0.968245836551855*sh_7_14*z
202
+ if lmax == 8:
203
+ return torch.stack([
204
+ sh_0_0,
205
+ sh_1_0, sh_1_1, sh_1_2,
206
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
207
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
208
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
209
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10,
210
+ sh_6_0, sh_6_1, sh_6_2, sh_6_3, sh_6_4, sh_6_5, sh_6_6, sh_6_7, sh_6_8, sh_6_9, sh_6_10, sh_6_11, sh_6_12,
211
+ sh_7_0, sh_7_1, sh_7_2, sh_7_3, sh_7_4, sh_7_5, sh_7_6, sh_7_7, sh_7_8, sh_7_9, sh_7_10, sh_7_11, sh_7_12, sh_7_13, sh_7_14,
212
+ sh_8_0, sh_8_1, sh_8_2, sh_8_3, sh_8_4, sh_8_5, sh_8_6, sh_8_7, sh_8_8, sh_8_9, sh_8_10, sh_8_11, sh_8_12, sh_8_13, sh_8_14, sh_8_15, sh_8_16
213
+ ], dim=-1)
214
+
215
+ sh_9_0 = 0.97182531580755*sh_8_0*z + 0.971825315807551*sh_8_16*x
+ sh_9_1 = 0.458122847290851*sh_8_0*y + 0.916245694581702*sh_8_1*z + 0.916245694581702*sh_8_15*x
+ sh_9_2 = -0.078567420131839*sh_8_0*z + 0.62853936105471*sh_8_1*y + 0.86066296582387*sh_8_14*x + 0.0785674201318385*sh_8_16*x + 0.860662965823871*sh_8_2*z
+ sh_9_3 = -0.136082763487955*sh_8_1*z + 0.805076485899413*sh_8_13*x + 0.136082763487954*sh_8_15*x + 0.74535599249993*sh_8_2*y + 0.805076485899413*sh_8_3*z
+ sh_9_4 = 0.749485420179558*sh_8_12*x + 0.192450089729875*sh_8_14*x - 0.192450089729876*sh_8_2*z + 0.831479419283099*sh_8_3*y + 0.749485420179558*sh_8_4*z
+ sh_9_5 = 0.693888666488711*sh_8_11*x + 0.248451997499977*sh_8_13*x - 0.248451997499976*sh_8_3*z + 0.895806416477617*sh_8_4*y + 0.69388866648871*sh_8_5*z
+ sh_9_6 = 0.638284738504225*sh_8_10*x + 0.304290309725092*sh_8_12*x - 0.304290309725092*sh_8_4*z + 0.942809041582063*sh_8_5*y + 0.638284738504225*sh_8_6*z
+ sh_9_7 = 0.360041149911548*sh_8_11*x - 0.360041149911548*sh_8_5*z + 0.974996043043569*sh_8_6*y + 0.582671582316751*sh_8_7*z + 0.582671582316751*sh_8_9*x
+ sh_9_8 = 0.415739709641549*sh_8_10*x - 0.415739709641549*sh_8_6*z + 0.993807989999906*sh_8_7*y + 0.74535599249993*sh_8_8*x
+ sh_9_9 = -0.66666666666666666667*sh_8_7*x + sh_8_8*y - 0.66666666666666666667*sh_8_9*z
+ sh_9_10 = -0.415739709641549*sh_8_10*z - 0.415739709641549*sh_8_6*x + 0.74535599249993*sh_8_8*z + 0.993807989999906*sh_8_9*y
+ sh_9_11 = 0.974996043043568*sh_8_10*y - 0.360041149911547*sh_8_11*z - 0.360041149911548*sh_8_5*x - 0.582671582316751*sh_8_7*x + 0.582671582316751*sh_8_9*z
+ sh_9_12 = 0.638284738504225*sh_8_10*z + 0.942809041582063*sh_8_11*y - 0.304290309725092*sh_8_12*z - 0.304290309725092*sh_8_4*x - 0.638284738504225*sh_8_6*x
+ sh_9_13 = 0.693888666488711*sh_8_11*z + 0.895806416477617*sh_8_12*y - 0.248451997499977*sh_8_13*z - 0.248451997499977*sh_8_3*x - 0.693888666488711*sh_8_5*x
+ sh_9_14 = 0.749485420179558*sh_8_12*z + 0.831479419283098*sh_8_13*y - 0.192450089729875*sh_8_14*z - 0.192450089729875*sh_8_2*x - 0.749485420179558*sh_8_4*x
+ sh_9_15 = -0.136082763487954*sh_8_1*x + 0.805076485899413*sh_8_13*z + 0.745355992499929*sh_8_14*y - 0.136082763487955*sh_8_15*z - 0.805076485899413*sh_8_3*x
+ sh_9_16 = -0.0785674201318389*sh_8_0*x + 0.86066296582387*sh_8_14*z + 0.628539361054709*sh_8_15*y - 0.0785674201318387*sh_8_16*z - 0.860662965823871*sh_8_2*x
+ sh_9_17 = -0.9162456945817*sh_8_1*x + 0.916245694581702*sh_8_15*z + 0.458122847290851*sh_8_16*y
+ sh_9_18 = -0.97182531580755*sh_8_0*x + 0.97182531580755*sh_8_16*z
+ if lmax == 9:
+ return torch.stack([
+ sh_0_0,
+ sh_1_0, sh_1_1, sh_1_2,
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10,
+ sh_6_0, sh_6_1, sh_6_2, sh_6_3, sh_6_4, sh_6_5, sh_6_6, sh_6_7, sh_6_8, sh_6_9, sh_6_10, sh_6_11, sh_6_12,
+ sh_7_0, sh_7_1, sh_7_2, sh_7_3, sh_7_4, sh_7_5, sh_7_6, sh_7_7, sh_7_8, sh_7_9, sh_7_10, sh_7_11, sh_7_12, sh_7_13, sh_7_14,
+ sh_8_0, sh_8_1, sh_8_2, sh_8_3, sh_8_4, sh_8_5, sh_8_6, sh_8_7, sh_8_8, sh_8_9, sh_8_10, sh_8_11, sh_8_12, sh_8_13, sh_8_14, sh_8_15, sh_8_16,
+ sh_9_0, sh_9_1, sh_9_2, sh_9_3, sh_9_4, sh_9_5, sh_9_6, sh_9_7, sh_9_8, sh_9_9, sh_9_10, sh_9_11, sh_9_12, sh_9_13, sh_9_14, sh_9_15, sh_9_16, sh_9_17, sh_9_18
+ ], dim=-1)
+
+ sh_10_0 = 0.974679434480897*sh_9_0*z + 0.974679434480897*sh_9_18*x
+ sh_10_1 = 0.435889894354067*sh_9_0*y + 0.924662100445347*sh_9_1*z + 0.924662100445347*sh_9_17*x
+ sh_10_2 = -0.0707106781186546*sh_9_0*z + 0.6*sh_9_1*y + 0.874642784226796*sh_9_16*x + 0.070710678118655*sh_9_18*x + 0.874642784226795*sh_9_2*z
+ sh_10_3 = -0.122474487139159*sh_9_1*z + 0.824621125123533*sh_9_15*x + 0.122474487139159*sh_9_17*x + 0.714142842854285*sh_9_2*y + 0.824621125123533*sh_9_3*z
+ sh_10_4 = 0.774596669241484*sh_9_14*x + 0.173205080756887*sh_9_16*x - 0.173205080756888*sh_9_2*z + 0.8*sh_9_3*y + 0.774596669241483*sh_9_4*z
+ sh_10_5 = 0.724568837309472*sh_9_13*x + 0.223606797749979*sh_9_15*x - 0.223606797749979*sh_9_3*z + 0.866025403784438*sh_9_4*y + 0.724568837309472*sh_9_5*z
+ sh_10_6 = 0.674536878161602*sh_9_12*x + 0.273861278752583*sh_9_14*x - 0.273861278752583*sh_9_4*z + 0.916515138991168*sh_9_5*y + 0.674536878161602*sh_9_6*z
+ sh_10_7 = 0.62449979983984*sh_9_11*x + 0.324037034920393*sh_9_13*x - 0.324037034920393*sh_9_5*z + 0.953939201416946*sh_9_6*y + 0.62449979983984*sh_9_7*z
+ sh_10_8 = 0.574456264653803*sh_9_10*x + 0.374165738677394*sh_9_12*x - 0.374165738677394*sh_9_6*z + 0.979795897113272*sh_9_7*y + 0.574456264653803*sh_9_8*z
+ sh_10_9 = 0.424264068711928*sh_9_11*x - 0.424264068711929*sh_9_7*z + 0.99498743710662*sh_9_8*y + 0.741619848709567*sh_9_9*x
+ sh_10_10 = -0.670820393249937*sh_9_10*z - 0.670820393249937*sh_9_8*x + sh_9_9*y
+ sh_10_11 = 0.99498743710662*sh_9_10*y - 0.424264068711929*sh_9_11*z - 0.424264068711929*sh_9_7*x + 0.741619848709567*sh_9_9*z
+ sh_10_12 = 0.574456264653803*sh_9_10*z + 0.979795897113272*sh_9_11*y - 0.374165738677395*sh_9_12*z - 0.374165738677394*sh_9_6*x - 0.574456264653803*sh_9_8*x
+ sh_10_13 = 0.62449979983984*sh_9_11*z + 0.953939201416946*sh_9_12*y - 0.324037034920393*sh_9_13*z - 0.324037034920393*sh_9_5*x - 0.62449979983984*sh_9_7*x
+ sh_10_14 = 0.674536878161602*sh_9_12*z + 0.916515138991168*sh_9_13*y - 0.273861278752583*sh_9_14*z - 0.273861278752583*sh_9_4*x - 0.674536878161603*sh_9_6*x
+ sh_10_15 = 0.724568837309472*sh_9_13*z + 0.866025403784439*sh_9_14*y - 0.223606797749979*sh_9_15*z - 0.223606797749979*sh_9_3*x - 0.724568837309472*sh_9_5*x
+ sh_10_16 = 0.774596669241484*sh_9_14*z + 0.8*sh_9_15*y - 0.173205080756888*sh_9_16*z - 0.173205080756887*sh_9_2*x - 0.774596669241484*sh_9_4*x
+ sh_10_17 = -0.12247448713916*sh_9_1*x + 0.824621125123532*sh_9_15*z + 0.714142842854285*sh_9_16*y - 0.122474487139158*sh_9_17*z - 0.824621125123533*sh_9_3*x
+ sh_10_18 = -0.0707106781186548*sh_9_0*x + 0.874642784226796*sh_9_16*z + 0.6*sh_9_17*y - 0.0707106781186546*sh_9_18*z - 0.874642784226796*sh_9_2*x
+ sh_10_19 = -0.924662100445348*sh_9_1*x + 0.924662100445347*sh_9_17*z + 0.435889894354068*sh_9_18*y
+ sh_10_20 = -0.974679434480898*sh_9_0*x + 0.974679434480896*sh_9_18*z
+ if lmax == 10:
+ return torch.stack([
+ sh_0_0,
+ sh_1_0, sh_1_1, sh_1_2,
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10,
+ sh_6_0, sh_6_1, sh_6_2, sh_6_3, sh_6_4, sh_6_5, sh_6_6, sh_6_7, sh_6_8, sh_6_9, sh_6_10, sh_6_11, sh_6_12,
+ sh_7_0, sh_7_1, sh_7_2, sh_7_3, sh_7_4, sh_7_5, sh_7_6, sh_7_7, sh_7_8, sh_7_9, sh_7_10, sh_7_11, sh_7_12, sh_7_13, sh_7_14,
+ sh_8_0, sh_8_1, sh_8_2, sh_8_3, sh_8_4, sh_8_5, sh_8_6, sh_8_7, sh_8_8, sh_8_9, sh_8_10, sh_8_11, sh_8_12, sh_8_13, sh_8_14, sh_8_15, sh_8_16,
+ sh_9_0, sh_9_1, sh_9_2, sh_9_3, sh_9_4, sh_9_5, sh_9_6, sh_9_7, sh_9_8, sh_9_9, sh_9_10, sh_9_11, sh_9_12, sh_9_13, sh_9_14, sh_9_15, sh_9_16, sh_9_17, sh_9_18,
+ sh_10_0, sh_10_1, sh_10_2, sh_10_3, sh_10_4, sh_10_5, sh_10_6, sh_10_7, sh_10_8, sh_10_9, sh_10_10, sh_10_11, sh_10_12, sh_10_13, sh_10_14, sh_10_15, sh_10_16, sh_10_17, sh_10_18, sh_10_19, sh_10_20
+ ], dim=-1)
+
+ sh_11_0 = 0.977008420918394*sh_10_0*z + 0.977008420918394*sh_10_20*x
+ sh_11_1 = 0.416597790450531*sh_10_0*y + 0.9315409787236*sh_10_1*z + 0.931540978723599*sh_10_19*x
+ sh_11_2 = -0.0642824346533223*sh_10_0*z + 0.574959574576069*sh_10_1*y + 0.88607221316445*sh_10_18*x + 0.886072213164452*sh_10_2*z + 0.0642824346533226*sh_10_20*x
+ sh_11_3 = -0.111340442853781*sh_10_1*z + 0.84060190949577*sh_10_17*x + 0.111340442853781*sh_10_19*x + 0.686348585024614*sh_10_2*y + 0.840601909495769*sh_10_3*z
+ sh_11_4 = 0.795129803842541*sh_10_16*x + 0.157459164324444*sh_10_18*x - 0.157459164324443*sh_10_2*z + 0.771389215839871*sh_10_3*y + 0.795129803842541*sh_10_4*z
+ sh_11_5 = 0.74965556829412*sh_10_15*x + 0.203278907045435*sh_10_17*x - 0.203278907045436*sh_10_3*z + 0.838140405208444*sh_10_4*y + 0.74965556829412*sh_10_5*z
+ sh_11_6 = 0.70417879021953*sh_10_14*x + 0.248964798865985*sh_10_16*x - 0.248964798865985*sh_10_4*z + 0.890723542830247*sh_10_5*y + 0.704178790219531*sh_10_6*z
+ sh_11_7 = 0.658698943008611*sh_10_13*x + 0.294579122654903*sh_10_15*x - 0.294579122654903*sh_10_5*z + 0.9315409787236*sh_10_6*y + 0.658698943008611*sh_10_7*z
+ sh_11_8 = 0.613215343783275*sh_10_12*x + 0.340150671524904*sh_10_14*x - 0.340150671524904*sh_10_6*z + 0.962091385841669*sh_10_7*y + 0.613215343783274*sh_10_8*z
+ sh_11_9 = 0.567727090763491*sh_10_11*x + 0.385694607919935*sh_10_13*x - 0.385694607919935*sh_10_7*z + 0.983332166035633*sh_10_8*y + 0.56772709076349*sh_10_9*z
+ sh_11_10 = 0.738548945875997*sh_10_10*x + 0.431219680932052*sh_10_12*x - 0.431219680932052*sh_10_8*z + 0.995859195463938*sh_10_9*y
+ sh_11_11 = sh_10_10*y - 0.674199862463242*sh_10_11*z - 0.674199862463243*sh_10_9*x
+ sh_11_12 = 0.738548945875996*sh_10_10*z + 0.995859195463939*sh_10_11*y - 0.431219680932052*sh_10_12*z - 0.431219680932053*sh_10_8*x
+ sh_11_13 = 0.567727090763491*sh_10_11*z + 0.983332166035634*sh_10_12*y - 0.385694607919935*sh_10_13*z - 0.385694607919935*sh_10_7*x - 0.567727090763491*sh_10_9*x
+ sh_11_14 = 0.613215343783275*sh_10_12*z + 0.96209138584167*sh_10_13*y - 0.340150671524904*sh_10_14*z - 0.340150671524904*sh_10_6*x - 0.613215343783274*sh_10_8*x
+ sh_11_15 = 0.658698943008611*sh_10_13*z + 0.9315409787236*sh_10_14*y - 0.294579122654903*sh_10_15*z - 0.294579122654903*sh_10_5*x - 0.65869894300861*sh_10_7*x
+ sh_11_16 = 0.70417879021953*sh_10_14*z + 0.890723542830246*sh_10_15*y - 0.248964798865985*sh_10_16*z - 0.248964798865985*sh_10_4*x - 0.70417879021953*sh_10_6*x
+ sh_11_17 = 0.749655568294121*sh_10_15*z + 0.838140405208444*sh_10_16*y - 0.203278907045436*sh_10_17*z - 0.203278907045435*sh_10_3*x - 0.749655568294119*sh_10_5*x
+ sh_11_18 = 0.79512980384254*sh_10_16*z + 0.77138921583987*sh_10_17*y - 0.157459164324443*sh_10_18*z - 0.157459164324444*sh_10_2*x - 0.795129803842541*sh_10_4*x
+ sh_11_19 = -0.111340442853782*sh_10_1*x + 0.84060190949577*sh_10_17*z + 0.686348585024614*sh_10_18*y - 0.111340442853781*sh_10_19*z - 0.840601909495769*sh_10_3*x
+ sh_11_20 = -0.0642824346533226*sh_10_0*x + 0.886072213164451*sh_10_18*z + 0.57495957457607*sh_10_19*y - 0.886072213164451*sh_10_2*x - 0.0642824346533228*sh_10_20*z
+ sh_11_21 = -0.9315409787236*sh_10_1*x + 0.931540978723599*sh_10_19*z + 0.416597790450531*sh_10_20*y
+ sh_11_22 = -0.977008420918393*sh_10_0*x + 0.977008420918393*sh_10_20*z
+ return torch.stack([
+ sh_0_0,
+ sh_1_0, sh_1_1, sh_1_2,
+ sh_2_0, sh_2_1, sh_2_2, sh_2_3, sh_2_4,
+ sh_3_0, sh_3_1, sh_3_2, sh_3_3, sh_3_4, sh_3_5, sh_3_6,
+ sh_4_0, sh_4_1, sh_4_2, sh_4_3, sh_4_4, sh_4_5, sh_4_6, sh_4_7, sh_4_8,
+ sh_5_0, sh_5_1, sh_5_2, sh_5_3, sh_5_4, sh_5_5, sh_5_6, sh_5_7, sh_5_8, sh_5_9, sh_5_10,
+ sh_6_0, sh_6_1, sh_6_2, sh_6_3, sh_6_4, sh_6_5, sh_6_6, sh_6_7, sh_6_8, sh_6_9, sh_6_10, sh_6_11, sh_6_12,
+ sh_7_0, sh_7_1, sh_7_2, sh_7_3, sh_7_4, sh_7_5, sh_7_6, sh_7_7, sh_7_8, sh_7_9, sh_7_10, sh_7_11, sh_7_12, sh_7_13, sh_7_14,
+ sh_8_0, sh_8_1, sh_8_2, sh_8_3, sh_8_4, sh_8_5, sh_8_6, sh_8_7, sh_8_8, sh_8_9, sh_8_10, sh_8_11, sh_8_12, sh_8_13, sh_8_14, sh_8_15, sh_8_16,
+ sh_9_0, sh_9_1, sh_9_2, sh_9_3, sh_9_4, sh_9_5, sh_9_6, sh_9_7, sh_9_8, sh_9_9, sh_9_10, sh_9_11, sh_9_12, sh_9_13, sh_9_14, sh_9_15, sh_9_16, sh_9_17, sh_9_18,
+ sh_10_0, sh_10_1, sh_10_2, sh_10_3, sh_10_4, sh_10_5, sh_10_6, sh_10_7, sh_10_8, sh_10_9, sh_10_10, sh_10_11, sh_10_12, sh_10_13, sh_10_14, sh_10_15, sh_10_16, sh_10_17, sh_10_18, sh_10_19, sh_10_20,
+ sh_11_0, sh_11_1, sh_11_2, sh_11_3, sh_11_4, sh_11_5, sh_11_6, sh_11_7, sh_11_8, sh_11_9, sh_11_10, sh_11_11, sh_11_12, sh_11_13, sh_11_14, sh_11_15, sh_11_16, sh_11_17, sh_11_18, sh_11_19, sh_11_20, sh_11_21, sh_11_22
+ ], dim=-1)
+
+
+ def collate_fn(graph_list):
+ return Collater(if_lcmp=True)(graph_list)
+
+
+ class Collater:
+ def __init__(self, if_lcmp):
+ self.if_lcmp = if_lcmp
+ self.flag_pyg2 = (torch_geometric.__version__[0] == '2')
+
+ def __call__(self, graph_list):
+ if self.if_lcmp:
+ flag_dict = hasattr(graph_list[0], 'subgraph_dict')
+ if self.flag_pyg2:
+ assert flag_dict, 'Please generate the graph file with the current version of PyG'
+ batch = Batch.from_data_list(graph_list)
+
+ subgraph_atom_idx_batch = []
+ subgraph_edge_idx_batch = []
+ subgraph_edge_ang_batch = []
+ subgraph_index_batch = []
+ if flag_dict:
+ for index_batch in range(len(graph_list)):
+ (subgraph_atom_idx, subgraph_edge_idx, subgraph_edge_ang,
+ subgraph_index) = graph_list[index_batch].subgraph_dict.values()
+ if self.flag_pyg2:
+ subgraph_atom_idx_batch.append(subgraph_atom_idx + batch._slice_dict['x'][index_batch])
+ subgraph_edge_idx_batch.append(subgraph_edge_idx + batch._slice_dict['edge_attr'][index_batch])
+ subgraph_index_batch.append(subgraph_index + batch._slice_dict['edge_attr'][index_batch] * 2)
+ else:
+ subgraph_atom_idx_batch.append(subgraph_atom_idx + batch.__slices__['x'][index_batch])
+ subgraph_edge_idx_batch.append(subgraph_edge_idx + batch.__slices__['edge_attr'][index_batch])
+ subgraph_index_batch.append(subgraph_index + batch.__slices__['edge_attr'][index_batch] * 2)
+ subgraph_edge_ang_batch.append(subgraph_edge_ang)
+ else:
+ for index_batch, (subgraph_atom_idx, subgraph_edge_idx,
+ subgraph_edge_ang, subgraph_index) in enumerate(batch.subgraph):
+ subgraph_atom_idx_batch.append(subgraph_atom_idx + batch.__slices__['x'][index_batch])
+ subgraph_edge_idx_batch.append(subgraph_edge_idx + batch.__slices__['edge_attr'][index_batch])
+ subgraph_edge_ang_batch.append(subgraph_edge_ang)
+ subgraph_index_batch.append(subgraph_index + batch.__slices__['edge_attr'][index_batch] * 2)
+
+ subgraph_atom_idx_batch = torch.cat(subgraph_atom_idx_batch, dim=0)
+ subgraph_edge_idx_batch = torch.cat(subgraph_edge_idx_batch, dim=0)
+ subgraph_edge_ang_batch = torch.cat(subgraph_edge_ang_batch, dim=0)
+ subgraph_index_batch = torch.cat(subgraph_index_batch, dim=0)
+
+ subgraph = (subgraph_atom_idx_batch, subgraph_edge_idx_batch, subgraph_edge_ang_batch, subgraph_index_batch)
+
+ return batch, subgraph
+ else:
+ return Batch.from_data_list(graph_list)
+
+
+ def load_orbital_types(path, return_orbital_types=False):
+ orbital_types = []
+ with open(path) as f:
+ line = f.readline()
+ while line:
+ orbital_types.append(list(map(int, line.split())))
+ line = f.readline()
+ atom_num_orbital = [sum(map(lambda x: 2 * x + 1, atom_orbital_types)) for atom_orbital_types in orbital_types]
+ if return_orbital_types:
+ return atom_num_orbital, orbital_types
+ else:
+ return atom_num_orbital
+
+
+ """
+ The function get_graph below is extended from "https://github.com/materialsproject/pymatgen", which has the MIT License below
+
+ ---------------------------------------------------------------------------
+ The MIT License (MIT)
+ Copyright (c) 2011-2012 MIT & The Regents of the University of California, through Lawrence Berkeley National Laboratory
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of
+ this software and associated documentation files (the "Software"), to deal in
+ the Software without restriction, including without limitation the rights to
+ use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
+ the Software, and to permit persons to whom the Software is furnished to do so,
+ subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+ COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+ IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ """
+ def get_graph(cart_coords, frac_coords, numbers, stru_id, r, max_num_nbr, numerical_tol, lattice,
+ default_dtype_torch, tb_folder, interface, num_l, create_from_DFT, if_lcmp_graph,
+ separate_onsite, target='hamiltonian', huge_structure=False, only_get_R_list=False, if_new_sp=False,
+ if_require_grad=False, fid_rc=None, **kwargs):
+ assert target in ['hamiltonian', 'phiVdphi', 'density_matrix', 'O_ij', 'E_ij', 'E_i']
+ if target == 'density_matrix' or target == 'O_ij':
+ assert interface == 'h5' or interface == 'h5_rc_only'
+ if target == 'E_ij':
+ assert interface == 'h5'
+ assert create_from_DFT is True
+ assert separate_onsite is True
+ if target == 'E_i':
+ assert interface == 'h5'
+ assert if_lcmp_graph is False
+ assert separate_onsite is True
+ if create_from_DFT:
+ assert tb_folder is not None
+ assert max_num_nbr == 0
+ if interface == 'h5_rc_only' and target == 'E_ij':
+ raise NotImplementedError
+ elif interface == 'h5' or (interface == 'h5_rc_only' and target != 'E_ij'):
+ key_atom_list = [[] for _ in range(len(numbers))]
+ edge_idx, edge_fea, edge_idx_first = [], [], []
+ if if_lcmp_graph:
+ atom_idx_connect, edge_idx_connect = [], []
+ edge_idx_connect_cursor = 0
+ if target == 'E_ij':
+ fid = h5py.File(os.path.join(tb_folder, 'E_delta_ee_ij.h5'), 'r')
+ else:
+ if if_require_grad:
+ fid = fid_rc
+ else:
+ fid = h5py.File(os.path.join(tb_folder, 'rc.h5'), 'r')
+ for k in fid.keys():
+ key = json.loads(k)
+ key_tensor = torch.tensor([key[0], key[1], key[2], key[3] - 1, key[4] - 1]) # (R, i, j); i and j are 0-based indices
+ if separate_onsite:
+ if key[0] == 0 and key[1] == 0 and key[2] == 0 and key[3] == key[4]:
+ continue
+ key_atom_list[key[3] - 1].append(key_tensor)
+ if target != 'E_ij' and not if_require_grad:
+ fid.close()
+
+ for index_first, (cart_coord, keys_tensor) in enumerate(zip(cart_coords, key_atom_list)):
+ keys_tensor = torch.stack(keys_tensor)
+ cart_coords_j = cart_coords[keys_tensor[:, 4]] + keys_tensor[:, :3].type(default_dtype_torch).to(cart_coords.device) @ lattice.to(cart_coords.device)
+ dist = torch.norm(cart_coords_j - cart_coord[None, :], dim=1)
+ len_nn = keys_tensor.shape[0]
+ edge_idx_first.extend([index_first] * len_nn)
+ edge_idx.extend(keys_tensor[:, 4].tolist())
+
+ edge_fea_single = torch.cat([dist.view(-1, 1), cart_coord.view(1, 3).expand(len_nn, 3)], dim=-1)
+ edge_fea_single = torch.cat([edge_fea_single, cart_coords_j, cart_coords[keys_tensor[:, 4]]], dim=-1)
+ edge_fea.append(edge_fea_single)
+
+ if if_lcmp_graph:
+ atom_idx_connect.append(keys_tensor[:, 4])
+ edge_idx_connect.append(range(edge_idx_connect_cursor, edge_idx_connect_cursor + len_nn))
+ edge_idx_connect_cursor += len_nn
+
+ edge_fea = torch.cat(edge_fea).type(default_dtype_torch)
+ edge_idx = torch.stack([torch.LongTensor(edge_idx_first), torch.LongTensor(edge_idx)])
+ else:
+ raise NotImplementedError
+ else:
+ cart_coords_np = cart_coords.detach().numpy()
+ frac_coords_np = frac_coords.detach().numpy()
+ lattice_np = lattice.detach().numpy()
+ num_atom = cart_coords.shape[0]
+
+ center_coords_min = np.min(cart_coords_np, axis=0)
+ center_coords_max = np.max(cart_coords_np, axis=0)
+ global_min = center_coords_min - r - numerical_tol
+ global_max = center_coords_max + r + numerical_tol
+ global_min_torch = torch.tensor(global_min)
+ global_max_torch = torch.tensor(global_max)
+
+ reciprocal_lattice = np.linalg.inv(lattice_np).T * 2 * np.pi
+ recp_len = np.sqrt(np.sum(reciprocal_lattice ** 2, axis=1))
+ maxr = np.ceil((r + 0.15) * recp_len / (2 * np.pi))
+ nmin = np.floor(np.min(frac_coords_np, axis=0)) - maxr
+ nmax = np.ceil(np.max(frac_coords_np, axis=0)) + maxr
+ all_ranges = [np.arange(x, y, dtype='int64') for x, y in zip(nmin, nmax)]
+ images = torch.tensor(list(itertools.product(*all_ranges))).type_as(lattice)
+
+ if only_get_R_list:
+ return images
+
+ coords = (images @ lattice)[:, None, :] + cart_coords[None, :, :]
+ indices = torch.arange(num_atom).unsqueeze(0).expand(images.shape[0], num_atom)
+ valid_index_bool = coords.gt(global_min_torch) * coords.lt(global_max_torch)
+ valid_index_bool = valid_index_bool.all(dim=-1)
+ valid_coords = coords[valid_index_bool]
+ valid_indices = indices[valid_index_bool]
+
+
+ valid_coords_np = valid_coords.detach().numpy()
+ all_cube_index = _compute_cube_index(valid_coords_np, global_min, r)
+ nx, ny, nz = _compute_cube_index(global_max, global_min, r) + 1
+ all_cube_index = _three_to_one(all_cube_index, ny, nz)
+ site_cube_index = _three_to_one(_compute_cube_index(cart_coords_np, global_min, r), ny, nz)
+ cube_to_coords_index = collections.defaultdict(list) # type: Dict[int, List]
+
+ for index, cart_coord in enumerate(all_cube_index.ravel()):
+ cube_to_coords_index[cart_coord].append(index)
+
+ site_neighbors = find_neighbors(site_cube_index, nx, ny, nz)
+
+ edge_idx, edge_fea, edge_idx_first = [], [], []
+ if if_lcmp_graph:
+ atom_idx_connect, edge_idx_connect = [], []
+ edge_idx_connect_cursor = 0
+ for index_first, (cart_coord, j) in enumerate(zip(cart_coords, site_neighbors)):
+ l1 = np.array(_three_to_one(j, ny, nz), dtype=int).ravel()
+ ks = [k for k in l1 if k in cube_to_coords_index]
+ nn_coords_index = np.concatenate([cube_to_coords_index[k] for k in ks], axis=0)
+ nn_coords = valid_coords[nn_coords_index]
+ nn_indices = valid_indices[nn_coords_index]
+ dist = torch.norm(nn_coords - cart_coord[None, :], dim=1)
+
+ if separate_onsite is False:
+ nn_coords = nn_coords.squeeze()
+ nn_indices = nn_indices.squeeze()
+ dist = dist.squeeze()
+ else:
+ nonzero_index = dist.nonzero(as_tuple=False)
+ nn_coords = nn_coords[nonzero_index]
+ nn_coords = nn_coords.squeeze(1)
+ nn_indices = nn_indices[nonzero_index].view(-1)
+ dist = dist[nonzero_index].view(-1)
+
+ if max_num_nbr > 0:
+ if len(dist) >= max_num_nbr:
+ dist_top, index_top = dist.topk(max_num_nbr, largest=False, sorted=True)
+ edge_idx.extend(nn_indices[index_top])
+ if if_lcmp_graph:
+ atom_idx_connect.append(nn_indices[index_top])
+ edge_idx_first.extend([index_first] * len(index_top))
+ edge_fea_single = torch.cat([dist_top.view(-1, 1), cart_coord.view(1, 3).expand(len(index_top), 3)], dim=-1)
+ edge_fea_single = torch.cat([edge_fea_single, nn_coords[index_top], cart_coords[nn_indices[index_top]]], dim=-1)
+ edge_fea.append(edge_fea_single)
+ else:
+ warnings.warn("Cannot find max_num_nbr neighbors within the cutoff radius")
+ edge_idx.extend(nn_indices)
+ if if_lcmp_graph:
+ atom_idx_connect.append(nn_indices)
+ edge_idx_first.extend([index_first] * len(nn_indices))
+ edge_fea_single = torch.cat([dist.view(-1, 1), cart_coord.view(1, 3).expand(len(nn_indices), 3)], dim=-1)
+ edge_fea_single = torch.cat([edge_fea_single, nn_coords, cart_coords[nn_indices]], dim=-1)
+ edge_fea.append(edge_fea_single)
+ else:
+ index_top = dist.lt(r + numerical_tol)
+ edge_idx.extend(nn_indices[index_top])
+ if if_lcmp_graph:
+ atom_idx_connect.append(nn_indices[index_top])
+ edge_idx_first.extend([index_first] * len(nn_indices[index_top]))
+ edge_fea_single = torch.cat([dist[index_top].view(-1, 1), cart_coord.view(1, 3).expand(len(nn_indices[index_top]), 3)], dim=-1)
+ edge_fea_single = torch.cat([edge_fea_single, nn_coords[index_top], cart_coords[nn_indices[index_top]]], dim=-1)
+ edge_fea.append(edge_fea_single)
+ if if_lcmp_graph:
+ edge_idx_connect.append(range(edge_idx_connect_cursor, edge_idx_connect_cursor + len(atom_idx_connect[-1])))
+ edge_idx_connect_cursor += len(atom_idx_connect[-1])
+
+
+ edge_fea = torch.cat(edge_fea).type(default_dtype_torch)
+ edge_idx_first = torch.LongTensor(edge_idx_first)
+ edge_idx = torch.stack([edge_idx_first, torch.LongTensor(edge_idx)])
+
+
+ if tb_folder is not None:
+ if target == 'E_ij':
+ read_file_list = ['E_ij.h5', 'E_delta_ee_ij.h5', 'E_xc_ij.h5']
+ graph_key_list = ['E_ij', 'E_delta_ee_ij', 'E_xc_ij']
+ read_terms_dict = {}
+ for read_file, graph_key in zip(read_file_list, graph_key_list):
+ read_terms = {}
+ fid = h5py.File(os.path.join(tb_folder, read_file), 'r')
+ for k, v in fid.items():
+ key = json.loads(k)
+ key = (key[0], key[1], key[2], key[3] - 1, key[4] - 1)
+ read_terms[key] = torch.tensor(v[...], dtype=default_dtype_torch)
+ read_terms_dict[graph_key] = read_terms
+ fid.close()
+
+ local_rotation_dict = {}
+ if if_require_grad:
+ fid = fid_rc
+ else:
+ fid = h5py.File(os.path.join(tb_folder, 'rc.h5'), 'r')
+ for k, v in fid.items():
+ key = json.loads(k)
+ key = (key[0], key[1], key[2], key[3] - 1, key[4] - 1) # (R, i, j); i and j are 0-based indices
+ if if_require_grad:
+ local_rotation_dict[key] = v
+ else:
+ local_rotation_dict[key] = torch.tensor(v, dtype=default_dtype_torch)
+ if not if_require_grad:
+ fid.close()
+ elif target == 'E_i':
+ read_file_list = ['E_i.h5']
+ graph_key_list = ['E_i']
+ read_terms_dict = {}
+ for read_file, graph_key in zip(read_file_list, graph_key_list):
+ read_terms = {}
+ fid = h5py.File(os.path.join(tb_folder, read_file), 'r')
+ for k, v in fid.items():
+ index_i = int(k) # index_i is a 0-based index
+ read_terms[index_i] = torch.tensor(v[...], dtype=default_dtype_torch)
+ fid.close()
+ read_terms_dict[graph_key] = read_terms
+ else:
+ if interface == 'h5' or interface == 'h5_rc_only':
+ atom_num_orbital = load_orbital_types(os.path.join(tb_folder, 'orbital_types.dat'))
+
+ if interface == 'h5':
+ with open(os.path.join(tb_folder, 'info.json'), 'r') as info_f:
+ info_dict = json.load(info_f)
+ spinful = info_dict["isspinful"]
+
+ if interface == 'h5':
+ if target == 'hamiltonian':
+ read_file_list = ['rh.h5']
+ graph_key_list = ['term_real']
+ elif target == 'phiVdphi':
+ read_file_list = ['rphiVdphi.h5']
+ graph_key_list = ['term_real']
+ elif target == 'density_matrix':
+ read_file_list = ['rdm.h5']
+ graph_key_list = ['term_real']
+ elif target == 'O_ij':
+ read_file_list = ['rh.h5', 'rdm.h5', 'rvna.h5', 'rvdee.h5', 'rvxc.h5']
+ graph_key_list = ['rh', 'rdm', 'rvna', 'rvdee', 'rvxc']
+ else:
+ raise ValueError('Unknown prediction target: {}'.format(target))
+ read_terms_dict = {}
+ for read_file, graph_key in zip(read_file_list, graph_key_list):
+ read_terms = {}
+ fid = h5py.File(os.path.join(tb_folder, read_file), 'r')
+ for k, v in fid.items():
+ key = json.loads(k)
+ key = (key[0], key[1], key[2], key[3] - 1, key[4] - 1)
+ if spinful:
+ num_orbital_row = atom_num_orbital[key[3]]
+ num_orbital_column = atom_num_orbital[key[4]]
+ # soc block order:
+ # 1 3
+ # 4 2
+ if target == 'phiVdphi':
+ raise NotImplementedError
+ else:
+ read_value = torch.stack([
+ torch.tensor(v[:num_orbital_row, :num_orbital_column].real, dtype=default_dtype_torch),
+ torch.tensor(v[:num_orbital_row, :num_orbital_column].imag, dtype=default_dtype_torch),
+ torch.tensor(v[num_orbital_row:, num_orbital_column:].real, dtype=default_dtype_torch),
+ torch.tensor(v[num_orbital_row:, num_orbital_column:].imag, dtype=default_dtype_torch),
+ torch.tensor(v[:num_orbital_row, num_orbital_column:].real, dtype=default_dtype_torch),
+ torch.tensor(v[:num_orbital_row, num_orbital_column:].imag, dtype=default_dtype_torch),
+ torch.tensor(v[num_orbital_row:, :num_orbital_column].real, dtype=default_dtype_torch),
+ torch.tensor(v[num_orbital_row:, :num_orbital_column].imag, dtype=default_dtype_torch)
+ ], dim=-1)
+ read_terms[key] = read_value
+ else:
+ read_terms[key] = torch.tensor(v[...], dtype=default_dtype_torch)
+ read_terms_dict[graph_key] = read_terms
+ fid.close()
+
+ local_rotation_dict = {}
+ if if_require_grad:
+ fid = fid_rc
+ else:
+ fid = h5py.File(os.path.join(tb_folder, 'rc.h5'), 'r')
+ for k, v in fid.items():
+ key = json.loads(k)
+ key = (key[0], key[1], key[2], key[3] - 1, key[4] - 1) # (R, i, j); i and j are 0-based indices
+ if if_require_grad:
+ local_rotation_dict[key] = v
+ else:
+ local_rotation_dict[key] = torch.tensor(v[...], dtype=default_dtype_torch)
+ if not if_require_grad:
+ fid.close()
+
+ max_num_orbital = max(atom_num_orbital)
+
+ elif interface == 'npz' or interface == 'npz_rc_only':
+ spinful = False
+ atom_num_orbital = load_orbital_types(os.path.join(tb_folder, 'orbital_types.dat'))
+
+ if interface == 'npz':
+ graph_key_list = ['term_real']
+ read_terms_dict = {'term_real': {}}
+ hopping_dict_read = np.load(os.path.join(tb_folder, 'rh.npz'))
+ for k, v in hopping_dict_read.items():
+ key = json.loads(k)
+ key = (key[0], key[1], key[2], key[3] - 1, key[4] - 1) # (R, i, j); i and j are 0-based indices
+ read_terms_dict['term_real'][key] = torch.tensor(v, dtype=default_dtype_torch)
+
+ local_rotation_dict = {}
+ local_rotation_dict_read = np.load(os.path.join(tb_folder, 'rc.npz'))
+ for k, v in local_rotation_dict_read.items():
+ key = json.loads(k)
+ key = (key[0], key[1], key[2], key[3] - 1, key[4] - 1)
+ local_rotation_dict[key] = torch.tensor(v, dtype=default_dtype_torch)
+
+ max_num_orbital = max(atom_num_orbital)
+ else:
+ raise ValueError(f'Unknown interface: {interface}')
+
+ if target == 'E_i':
+ term_dict = {}
+ onsite_term_dict = {}
+ for graph_key in graph_key_list:
+ term_dict[graph_key] = torch.full([numbers.shape[0], 1], np.nan, dtype=default_dtype_torch)
+ for index_atom in range(numbers.shape[0]):
+ assert index_atom in read_terms_dict[graph_key_list[0]]
+ for graph_key in graph_key_list:
+ term_dict[graph_key][index_atom] = read_terms_dict[graph_key][index_atom]
+ subgraph = None
+ else:
+ if interface == 'h5_rc_only' or interface == 'npz_rc_only':
+ local_rotation = []
+ else:
+ term_dict = {}
+ onsite_term_dict = {}
+ if target == 'E_ij':
+ for graph_key in graph_key_list:
+ term_dict[graph_key] = torch.full([edge_fea.shape[0], 1], np.nan, dtype=default_dtype_torch)
+ local_rotation = []
+ if separate_onsite is True:
+ for graph_key in graph_key_list:
+ onsite_term_dict['onsite_' + graph_key] = torch.full([numbers.shape[0], 1], np.nan, dtype=default_dtype_torch)
+ else:
+ term_mask = torch.zeros(edge_fea.shape[0], dtype=torch.bool)
+ for graph_key in graph_key_list:
+ if spinful:
+ term_dict[graph_key] = torch.full([edge_fea.shape[0], max_num_orbital, max_num_orbital, 8],
+ np.nan, dtype=default_dtype_torch)
+ else:
+ if target == 'phiVdphi':
+ term_dict[graph_key] = torch.full([edge_fea.shape[0], max_num_orbital, max_num_orbital, 3],
+ np.nan, dtype=default_dtype_torch)
+ else:
+ term_dict[graph_key] = torch.full([edge_fea.shape[0], max_num_orbital, max_num_orbital],
+ np.nan, dtype=default_dtype_torch)
+ local_rotation = []
+ if separate_onsite is True:
+ for graph_key in graph_key_list:
+ if spinful:
+ onsite_term_dict['onsite_' + graph_key] = torch.full(
+ [numbers.shape[0], max_num_orbital, max_num_orbital, 8],
+ np.nan, dtype=default_dtype_torch)
+ else:
+ if target == 'phiVdphi':
+ onsite_term_dict['onsite_' + graph_key] = torch.full(
+ [numbers.shape[0], max_num_orbital, max_num_orbital, 3],
+ np.nan, dtype=default_dtype_torch)
+ else:
+ onsite_term_dict['onsite_' + graph_key] = torch.full(
+ [numbers.shape[0], max_num_orbital, max_num_orbital],
+ np.nan, dtype=default_dtype_torch)
+
+ inv_lattice = torch.inverse(lattice).type(default_dtype_torch)
+ for index_edge in range(edge_fea.shape[0]):
+ # h_{i0, jR}; i and j are 0-based indices
+ R = torch.round(edge_fea[index_edge, 4:7].cpu() @ inv_lattice - edge_fea[index_edge, 7:10].cpu() @ inv_lattice).int().tolist()
+ i, j = edge_idx[:, index_edge]
+
+ key_term = (*R, i.item(), j.item())
+ if interface == 'h5_rc_only' or interface == 'npz_rc_only':
+ local_rotation.append(local_rotation_dict[key_term])
+ else:
+ if key_term in read_terms_dict[graph_key_list[0]]:
+ for graph_key in graph_key_list:
+ if target == 'E_ij':
+ term_dict[graph_key][index_edge] = read_terms_dict[graph_key][key_term]
+ else:
+ term_mask[index_edge] = True
+ if spinful:
+ term_dict[graph_key][index_edge, :atom_num_orbital[i], :atom_num_orbital[j], :] = read_terms_dict[graph_key][key_term]
+ else:
+ term_dict[graph_key][index_edge, :atom_num_orbital[i], :atom_num_orbital[j]] = read_terms_dict[graph_key][key_term]
+ local_rotation.append(local_rotation_dict[key_term])
+ else:
+ raise NotImplementedError(
797
+ "Graph radii that include hoppings absent from the DFT calculation are not yet supported")
798
+
799
+ if separate_onsite is True and interface != 'h5_rc_only' and interface != 'npz_rc_only':
800
+ for index_atom in range(numbers.shape[0]):
801
+ key_term = (0, 0, 0, index_atom, index_atom)
802
+ assert key_term in read_terms_dict[graph_key_list[0]]
803
+ for graph_key in graph_key_list:
804
+ if target == 'E_ij':
805
+ onsite_term_dict['onsite_' + graph_key][index_atom] = read_terms_dict[graph_key][key_term]
806
+ else:
807
+ if spinful:
808
+ onsite_term_dict['onsite_' + graph_key][index_atom, :atom_num_orbital[index_atom], :atom_num_orbital[index_atom], :] = \
809
+ read_terms_dict[graph_key][key_term]
810
+ else:
811
+ onsite_term_dict['onsite_' + graph_key][index_atom, :atom_num_orbital[index_atom], :atom_num_orbital[index_atom]] = \
812
+ read_terms_dict[graph_key][key_term]
813
+
814
+ if if_lcmp_graph:
815
+ local_rotation = torch.stack(local_rotation, dim=0)
816
+ assert local_rotation.shape[0] == edge_fea.shape[0]
817
+ r_vec = edge_fea[:, 1:4] - edge_fea[:, 4:7]
818
+ r_vec = r_vec.unsqueeze(1)
819
+ if huge_structure is False:
820
+ r_vec = torch.matmul(r_vec[:, None, :, :], local_rotation[None, :, :, :].to(r_vec.device)).reshape(-1, 3)
821
+ if if_new_sp:
822
+ r_vec = torch.nn.functional.normalize(r_vec, dim=-1)
823
+ angular_expansion = _spherical_harmonics(num_l - 1, -r_vec[..., 2], r_vec[..., 0],
824
+ r_vec[..., 1])
825
+ angular_expansion.mul_(torch.cat([
826
+ (math.sqrt(2 * l + 1) / math.sqrt(4 * math.pi)) * torch.ones(2 * l + 1,
827
+ dtype=angular_expansion.dtype,
828
+ device=angular_expansion.device)
829
+ for l in range(num_l)
830
+ ]))
831
+ angular_expansion = angular_expansion.reshape(edge_fea.shape[0], edge_fea.shape[0], -1)
832
+ else:
833
+ r_vec_sp = get_spherical_from_cartesian(r_vec)
834
+ sph_harm_func = SphericalHarmonics()
835
+ angular_expansion = []
836
+ for l in range(num_l):
837
+ angular_expansion.append(sph_harm_func.get(l, r_vec_sp[:, 0], r_vec_sp[:, 1]))
838
+ angular_expansion = torch.cat(angular_expansion, dim=-1).reshape(edge_fea.shape[0], edge_fea.shape[0], -1)
839
+
840
+ subgraph_atom_idx_list = []
841
+ subgraph_edge_idx_list = []
842
+ subgraph_edge_ang_list = []
843
+ subgraph_index = []
844
+ index_cursor = 0
845
+
846
+ for index in range(edge_fea.shape[0]):
847
+ # h_{i0, jR}
848
+ i, j = edge_idx[:, index]
849
+ subgraph_atom_idx = torch.stack([i.repeat(len(atom_idx_connect[i])), atom_idx_connect[i]]).T
850
+ subgraph_edge_idx = torch.LongTensor(edge_idx_connect[i])
851
+ if huge_structure:
852
+ r_vec_tmp = torch.matmul(r_vec[subgraph_edge_idx, :, :], local_rotation[index, :, :].to(r_vec.device)).reshape(-1, 3)
853
+ if if_new_sp:
854
+ r_vec_tmp = torch.nn.functional.normalize(r_vec_tmp, dim=-1)
855
+ subgraph_edge_ang = _spherical_harmonics(num_l - 1, -r_vec_tmp[..., 2], r_vec_tmp[..., 0], r_vec_tmp[..., 1])
856
+ subgraph_edge_ang.mul_(torch.cat([
857
+ (math.sqrt(2 * l + 1) / math.sqrt(4 * math.pi)) * torch.ones(2 * l + 1,
858
+ dtype=subgraph_edge_ang.dtype,
859
+ device=subgraph_edge_ang.device)
860
+ for l in range(num_l)
861
+ ]))
862
+ else:
863
+ r_vec_sp = get_spherical_from_cartesian(r_vec_tmp)
864
+ sph_harm_func = SphericalHarmonics()
865
+ angular_expansion = []
866
+ for l in range(num_l):
867
+ angular_expansion.append(sph_harm_func.get(l, r_vec_sp[:, 0], r_vec_sp[:, 1]))
868
+ subgraph_edge_ang = torch.cat(angular_expansion, dim=-1).reshape(-1, num_l ** 2)
869
+ else:
870
+ subgraph_edge_ang = angular_expansion[subgraph_edge_idx, index, :]
871
+
872
+ subgraph_atom_idx_list.append(subgraph_atom_idx)
873
+ subgraph_edge_idx_list.append(subgraph_edge_idx)
874
+ subgraph_edge_ang_list.append(subgraph_edge_ang)
875
+ subgraph_index += [index_cursor] * len(atom_idx_connect[i])
876
+ index_cursor += 1
877
+
878
+ subgraph_atom_idx = torch.stack([j.repeat(len(atom_idx_connect[j])), atom_idx_connect[j]]).T
879
+ subgraph_edge_idx = torch.LongTensor(edge_idx_connect[j])
880
+ if huge_structure:
881
+ r_vec_tmp = torch.matmul(r_vec[subgraph_edge_idx, :, :], local_rotation[index, :, :].to(r_vec.device)).reshape(-1, 3)
882
+ if if_new_sp:
883
+ r_vec_tmp = torch.nn.functional.normalize(r_vec_tmp, dim=-1)
884
+ subgraph_edge_ang = _spherical_harmonics(num_l - 1, -r_vec_tmp[..., 2], r_vec_tmp[..., 0], r_vec_tmp[..., 1])
885
+ subgraph_edge_ang.mul_(torch.cat([
886
+ (math.sqrt(2 * l + 1) / math.sqrt(4 * math.pi)) * torch.ones(2 * l + 1,
887
+ dtype=subgraph_edge_ang.dtype,
888
+ device=subgraph_edge_ang.device)
889
+ for l in range(num_l)
890
+ ]))
891
+ else:
892
+ r_vec_sp = get_spherical_from_cartesian(r_vec_tmp)
893
+ sph_harm_func = SphericalHarmonics()
894
+ angular_expansion = []
895
+ for l in range(num_l):
896
+ angular_expansion.append(sph_harm_func.get(l, r_vec_sp[:, 0], r_vec_sp[:, 1]))
897
+ subgraph_edge_ang = torch.cat(angular_expansion, dim=-1).reshape(-1, num_l ** 2)
898
+ else:
899
+ subgraph_edge_ang = angular_expansion[subgraph_edge_idx, index, :]
900
+ subgraph_atom_idx_list.append(subgraph_atom_idx)
901
+ subgraph_edge_idx_list.append(subgraph_edge_idx)
902
+ subgraph_edge_ang_list.append(subgraph_edge_ang)
903
+ subgraph_index += [index_cursor] * len(atom_idx_connect[j])
904
+ index_cursor += 1
905
+ subgraph = {"subgraph_atom_idx":torch.cat(subgraph_atom_idx_list, dim=0),
906
+ "subgraph_edge_idx":torch.cat(subgraph_edge_idx_list, dim=0),
907
+ "subgraph_edge_ang":torch.cat(subgraph_edge_ang_list, dim=0),
908
+ "subgraph_index":torch.LongTensor(subgraph_index)}
909
+ else:
910
+ subgraph = None
911
+
912
+ if interface == 'h5_rc_only' or interface == 'npz_rc_only':
913
+ data = Data(x=numbers, edge_index=edge_idx, edge_attr=edge_fea, stru_id=stru_id, term_mask=None,
914
+ term_real=None, onsite_term_real=None,
915
+ atom_num_orbital=torch.tensor(atom_num_orbital),
916
+ subgraph_dict=subgraph,
917
+ **kwargs)
918
+ else:
919
+ if target == 'E_ij' or target == 'E_i':
920
+ data = Data(x=numbers, edge_index=edge_idx, edge_attr=edge_fea, stru_id=stru_id,
921
+ **term_dict, **onsite_term_dict,
922
+ subgraph_dict=subgraph,
923
+ spinful=False,
924
+ **kwargs)
925
+ else:
926
+ data = Data(x=numbers, edge_index=edge_idx, edge_attr=edge_fea, stru_id=stru_id, term_mask=term_mask,
927
+ **term_dict, **onsite_term_dict,
928
+ atom_num_orbital=torch.tensor(atom_num_orbital),
929
+ subgraph_dict=subgraph,
930
+ spinful=spinful,
931
+ **kwargs)
932
+ else:
933
+ data = Data(x=numbers, edge_index=edge_idx, edge_attr=edge_fea, stru_id=stru_id, **kwargs)
934
+ return data
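The edge loop above recovers the integer lattice translation `R` of each hopping pair by multiplying the Cartesian displacement with the inverse lattice and rounding. A minimal NumPy sketch of that step, assuming (as the code does) that positions are row vectors and the lattice matrix stores one lattice vector per row:

```python
import numpy as np

# Recover the integer lattice vector R between two atoms from their Cartesian
# positions, as the per-edge loop does: R = round((r_j - r_i) @ lattice^{-1}).
lattice = np.array([[2.0, 0.0, 0.0],
                    [0.0, 3.0, 0.0],
                    [0.0, 0.0, 4.0]])  # rows are lattice vectors (assumption)
inv_lattice = np.linalg.inv(lattice)

r_i = np.array([0.1, 0.2, 0.3])
r_j = r_i + np.array([1, -2, 1]) @ lattice  # atom j displaced by R = (1, -2, 1)

R = np.round((r_j - r_i) @ inv_lattice).astype(int)
```

Rounding absorbs the small numerical noise left after the matrix inversion, so `R` is an exact integer triple.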
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/__init__.py ADDED
@@ -0,0 +1 @@
1
+ from .pred_ham import predict, predict_with_grad
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (230 Bytes). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/__pycache__/pred_ham.cpython-312.pyc ADDED
Binary file (28.8 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/band_config.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "calc_job": "band",
3
+ "which_k": 0,
4
+ "fermi_level": -3.82373,
5
+ "max_iter": 300,
6
+ "num_band": 50,
7
+ "k_data": ["15 0 0 0 0.5 0.5 0 Γ M", "15 0.5 0.5 0 0.3333333333333333 0.6666666666666667 0 M K", "15 0.3333333333333333 0.6666666666666667 0 0 0 0 K Γ"]
8
+ }
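Each `k_data` entry packs one k-path segment into a single string: the number of k-points, the fractional coordinates of the start and end points, and the two endpoint labels. A small parser sketch (the helper name is ours, not from the repo; the scripts below split the same fields with `k_data2num_ks` and `k_data2kpath`):

```python
def parse_k_segment(kdata: str):
    """Split a k_data entry such as '15 0 0 0 0.5 0.5 0 Γ M' into
    (num_points, k_start, k_end, labels)."""
    fields = kdata.split()
    num_points = int(fields[0])                       # points along the segment
    k_start = tuple(float(x) for x in fields[1:4])    # fractional start point
    k_end = tuple(float(x) for x in fields[4:7])      # fractional end point
    labels = tuple(fields[7:9])                       # high-symmetry labels
    return num_points, k_start, k_end, labels

num_points, k_start, k_end, labels = parse_k_segment("15 0 0 0 0.5 0.5 0 Γ M")
```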
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/dense_calc.jl ADDED
@@ -0,0 +1,234 @@
1
+ using DelimitedFiles, LinearAlgebra, JSON
2
+ using HDF5
3
+ using ArgParse
4
+ using SparseArrays
5
+ using Arpack
6
+ using JLD
7
+ # BLAS.set_num_threads(1)
8
+
9
+ const ev2Hartree = 0.036749324533634074
10
+ const Bohr2Ang = 0.529177249
11
+ const default_dtype = Complex{Float64}
12
+
13
+
14
+ function parse_commandline()
15
+ s = ArgParseSettings()
16
+ @add_arg_table! s begin
17
+ "--input_dir", "-i"
18
+ help = "path of rlat.dat, orbital_types.dat, site_positions.dat, hamiltonians_pred.h5, and overlaps.h5"
19
+ arg_type = String
20
+ default = "./"
21
+ "--output_dir", "-o"
22
+ help = "path of output openmx.Band"
23
+ arg_type = String
24
+ default = "./"
25
+ "--config"
26
+ help = "config file in the format of JSON"
27
+ arg_type = String
28
+ "--ill_project"
29
+ help = "projects out the eigenvectors of the overlap matrix that correspond to eigenvalues smaller than ill_threshold"
30
+ arg_type = Bool
31
+ default = true
32
+ "--ill_threshold"
33
+ help = "threshold for ill_project"
34
+ arg_type = Float64
35
+ default = 5e-4
36
+ end
37
+ return parse_args(s)
38
+ end
39
+
40
+
41
+ function _create_dict_h5(filename::String)
42
+ fid = h5open(filename, "r")
43
+ T = eltype(fid[keys(fid)[1]])
44
+ d_out = Dict{Array{Int64,1}, Array{T, 2}}()
45
+ for key in keys(fid)
46
+ data = read(fid[key])
47
+ nk = map(x -> parse(Int64, convert(String, x)), split(key[2 : length(key) - 1], ','))
48
+ d_out[nk] = permutedims(data)
49
+ end
50
+ close(fid)
51
+ return d_out
52
+ end
53
+
54
+
55
+ function genlist(x)
56
+ return collect(range(x[1], stop = x[2], length = Int64(x[3])))
57
+ end
58
+
59
+
60
+ function k_data2num_ks(kdata::AbstractString)
61
+ return parse(Int64,split(kdata)[1])
62
+ end
63
+
64
+
65
+ function k_data2kpath(kdata::AbstractString)
66
+ return map(x->parse(Float64,x), split(kdata)[2:7])
67
+ end
68
+
69
+
70
+ function std_out_array(a::AbstractArray)
71
+ return string(map(x->string(x," "),a)...)
72
+ end
73
+
74
+
75
+ function main()
76
+ parsed_args = parse_commandline()
77
+
78
+ println(parsed_args["config"])
79
+ config = JSON.parsefile(parsed_args["config"])
80
+ calc_job = config["calc_job"]
81
+
82
+ if isfile(joinpath(parsed_args["input_dir"],"info.json"))
83
+ spinful = JSON.parsefile(joinpath(parsed_args["input_dir"],"info.json"))["isspinful"]
84
+ else
85
+ spinful = false
86
+ end
87
+
88
+ site_positions = readdlm(joinpath(parsed_args["input_dir"], "site_positions.dat"))
89
+ nsites = size(site_positions, 2)
90
+
91
+ orbital_types_f = open(joinpath(parsed_args["input_dir"], "orbital_types.dat"), "r")
92
+ site_norbits = zeros(nsites)
93
+ orbital_types = Vector{Vector{Int64}}()
94
+ for index_site = 1:nsites
95
+ orbital_type = parse.(Int64, split(readline(orbital_types_f)))
96
+ push!(orbital_types, orbital_type)
97
+ end
98
+ site_norbits = (x->sum(x .* 2 .+ 1)).(orbital_types) * (1 + spinful)
99
+ norbits = sum(site_norbits)
100
+ site_norbits_cumsum = cumsum(site_norbits)
101
+
102
+ rlat = readdlm(joinpath(parsed_args["input_dir"], "rlat.dat"))
103
+
104
+
105
+ @info "read h5"
106
+ begin_time = time()
107
+ hamiltonians_pred = _create_dict_h5(joinpath(parsed_args["input_dir"], "hamiltonians_pred.h5"))
108
+ overlaps = _create_dict_h5(joinpath(parsed_args["input_dir"], "overlaps.h5"))
109
+ println("Time for reading h5: ", time() - begin_time, "s")
110
+
111
+ H_R = Dict{Vector{Int64}, Matrix{default_dtype}}()
112
+ S_R = Dict{Vector{Int64}, Matrix{default_dtype}}()
113
+
114
+ @info "construct Hamiltonian and overlap matrix in the real space"
115
+ begin_time = time()
116
+ for key in collect(keys(hamiltonians_pred))
117
+ hamiltonian_pred = hamiltonians_pred[key]
118
+ if (key ∈ keys(overlaps))
119
+ overlap = overlaps[key]
120
+ else
121
+ # continue
122
+ overlap = zero(hamiltonian_pred)
123
+ end
124
+ if spinful
125
+ overlap = vcat(hcat(overlap,zeros(size(overlap))),hcat(zeros(size(overlap)),overlap)) # the overlap matrix read from file only contains the upper-left spin block # TODO maybe drop the zeros?
126
+ end
127
+ R = key[1:3]; atom_i=key[4]; atom_j=key[5]
128
+
129
+ @assert (site_norbits[atom_i], site_norbits[atom_j]) == size(hamiltonian_pred)
130
+ @assert (site_norbits[atom_i], site_norbits[atom_j]) == size(overlap)
131
+ if !(R ∈ keys(H_R))
132
+ H_R[R] = zeros(default_dtype, norbits, norbits)
133
+ S_R[R] = zeros(default_dtype, norbits, norbits)
134
+ end
135
+ for block_matrix_i in 1:site_norbits[atom_i]
136
+ for block_matrix_j in 1:site_norbits[atom_j]
137
+ index_i = site_norbits_cumsum[atom_i] - site_norbits[atom_i] + block_matrix_i
138
+ index_j = site_norbits_cumsum[atom_j] - site_norbits[atom_j] + block_matrix_j
139
+ H_R[R][index_i, index_j] = hamiltonian_pred[block_matrix_i, block_matrix_j]
140
+ S_R[R][index_i, index_j] = overlap[block_matrix_i, block_matrix_j]
141
+ end
142
+ end
143
+ end
144
+ println("Time for constructing Hamiltonian and overlap matrix in the real space: ", time() - begin_time, " s")
145
+
146
+
147
+ if calc_job == "band"
148
+ fermi_level = config["fermi_level"]
149
+ k_data = config["k_data"]
150
+
151
+ ill_project = parsed_args["ill_project"] || ("ill_project" in keys(config) && config["ill_project"])
152
+ ill_threshold = max(parsed_args["ill_threshold"], get(config, "ill_threshold", 0.))
153
+
154
+ @info "calculate bands"
155
+ num_ks = k_data2num_ks.(k_data)
156
+ kpaths = k_data2kpath.(k_data)
157
+
158
+ egvals = zeros(Float64, norbits, sum(num_ks)[1])
159
+
160
+ begin_time = time()
161
+ idx_k = 1
162
+ for i = 1:size(kpaths, 1)
163
+ kpath = kpaths[i]
164
+ pnkpts = num_ks[i]
165
+ kxs = LinRange(kpath[1], kpath[4], pnkpts)
166
+ kys = LinRange(kpath[2], kpath[5], pnkpts)
167
+ kzs = LinRange(kpath[3], kpath[6], pnkpts)
168
+ for (kx, ky, kz) in zip(kxs, kys, kzs)
169
+ idx_k
170
+ H_k = zeros(default_dtype, norbits, norbits)
171
+ S_k = zeros(default_dtype, norbits, norbits)
172
+ for R in keys(H_R)
173
+ H_k += H_R[R] * exp(im*2π*([kx, ky, kz]⋅R))
174
+ S_k += S_R[R] * exp(im*2π*([kx, ky, kz]⋅R))
175
+ end
176
+ S_k = (S_k + S_k') / 2
177
+ H_k = (H_k + H_k') / 2
178
+ if ill_project
179
+ (egval_S, egvec_S) = eigen(Hermitian(S_k))
180
+ # egvec_S: shape (num_basis, num_bands)
181
+ project_index = abs.(egval_S) .> ill_threshold
182
+ if sum(project_index) != length(project_index)
183
+ # egval_S = egval_S[project_index]
184
+ egvec_S = egvec_S[:, project_index]
185
+ @warn "ill-conditioned eigenvalues detected, projected out $(length(project_index) - sum(project_index)) eigenvalues"
186
+ H_k = egvec_S' * H_k * egvec_S
187
+ S_k = egvec_S' * S_k * egvec_S
188
+ (egval, egvec) = eigen(Hermitian(H_k), Hermitian(S_k))
189
+ egval = vcat(egval, fill(1e4, length(project_index) - sum(project_index)))
190
+ egvec = egvec_S * egvec
191
+ else
192
+ (egval, egvec) = eigen(Hermitian(H_k), Hermitian(S_k))
193
+ end
194
+ else
195
+ (egval, egvec) = eigen(Hermitian(H_k), Hermitian(S_k))
196
+ end
197
+ egvals[:, idx_k] = egval
198
+ println("Time for solving No.$idx_k eigenvalues at k = ", [kx, ky, kz], ": ", time() - begin_time, " s")
199
+ idx_k += 1
200
+ end
201
+ end
202
+
203
+ # output in openmx band format
204
+ f = open(joinpath(parsed_args["output_dir"], "openmx.Band"),"w")
205
+ println(f, norbits, " ", 0, " ", ev2Hartree * fermi_level)
206
+ openmx_rlat = reshape((rlat .* Bohr2Ang), 1, :)
207
+ println(f, std_out_array(openmx_rlat))
208
+ println(f, length(k_data))
209
+ for line in k_data
210
+ println(f,line)
211
+ end
212
+ idx_k = 1
213
+ for i = 1:size(kpaths, 1)
214
+ pnkpts = num_ks[i]
215
+ kstart = kpaths[i][1:3]
216
+ kend = kpaths[i][4:6]
217
+ k_list = zeros(Float64,pnkpts,3)
218
+ for alpha = 1:3
219
+ k_list[:,alpha] = genlist([kstart[alpha],kend[alpha],pnkpts])
220
+ end
221
+ for j = 1:pnkpts
222
+ idx_k
223
+ kvec = k_list[j,:]
224
+ println(f, norbits, " ", std_out_array(kvec))
225
+ println(f, std_out_array(ev2Hartree * egvals[:, idx_k]))
226
+ idx_k += 1
227
+ end
228
+ end
229
+ close(f)
230
+ end
231
+ end
232
+
233
+
234
+ main()
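The k-loop in `main()` builds the Bloch Hamiltonian and overlap by the Fourier sum H(k) = Σ_R e^{2πi k·R} H(R) and then Hermitizes them before diagonalization. The same construction can be sketched in a few lines of NumPy on a toy one-band chain (our illustration, not repo code), where H([±1]) = t gives the textbook dispersion ε(k) = 2t·cos(2πk):

```python
import numpy as np

# Toy 1D chain with hopping t: H([1]) = H([-1]) = t  =>  eps(k) = 2*t*cos(2*pi*k)
t = -1.0
H_R = {(1,): np.array([[t]], dtype=complex),
       (-1,): np.array([[t]], dtype=complex)}

def bloch_hamiltonian(H_R, k):
    """Fourier sum H(k) = sum_R exp(2*pi*i k.R) H(R), as in the band loop."""
    norb = next(iter(H_R.values())).shape[0]
    H_k = np.zeros((norb, norb), dtype=complex)
    for R, block in H_R.items():
        H_k += block * np.exp(2j * np.pi * np.dot(k, R))
    return (H_k + H_k.conj().T) / 2  # Hermitize, as the scripts do

eig_gamma = np.linalg.eigvalsh(bloch_hamiltonian(H_R, (0.0,)))[0]  # 2t
eig_zone_edge = np.linalg.eigvalsh(bloch_hamiltonian(H_R, (0.5,)))[0]  # -2t
```

The explicit symmetrization `(H_k + H_k')/2` matters because the finite `R` sum can leave a tiny anti-Hermitian residue that would otherwise break `eigen(Hermitian(...))`.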
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/dense_calc.py ADDED
@@ -0,0 +1,277 @@
1
+ import json
2
+ import argparse
3
+ import h5py
4
+ import numpy as np
5
+ import os
6
+ from time import time
7
+ from scipy import linalg
8
+ import tqdm
9
+ from pathos.multiprocessing import ProcessingPool as Pool
10
+
11
+ def parse_commandline():
12
+ parser = argparse.ArgumentParser()
13
+ parser.add_argument(
14
+ "--input_dir", "-i", type=str, default="./",
15
+ help="path of rlat.dat, orbital_types.dat, site_positions.dat, hamiltonians_pred.h5, and overlaps.h5"
16
+ )
17
+ parser.add_argument(
18
+ "--output_dir", "-o", type=str, default="./",
19
+ help="path of output openmx.Band"
20
+ )
21
+ parser.add_argument(
22
+ "--config", type=str,
23
+ help="config file in the format of JSON"
24
+ )
25
+ parser.add_argument(
26
+ "--ill_project", type=bool,
27
+ help="projects out the eigenvectors of the overlap matrix that correspond to eigenvalues smaller than ill_threshold",
28
+ default=True
29
+ )
30
+ parser.add_argument(
31
+ "--ill_threshold", type=float,
32
+ help="threshold for ill_project",
33
+ default=5e-4
34
+ )
35
+ parser.add_argument(
36
+ "--multiprocessing", type=int,
37
+ help="multiprocessing for band calculation",
38
+ default=0
39
+ )
40
+ return parser.parse_args()
41
+
42
+ parsed_args = parse_commandline()
43
+
44
+ def _create_dict_h5(filename):
45
+ fid = h5py.File(filename, "r")
46
+ d_out = {}
47
+ for key in fid.keys():
48
+ data = np.array(fid[key])
49
+ nk = tuple(map(int, key[1:-1].split(',')))
50
+ # BS:
51
+ # the matrix does not need to be transposed in Python,
52
+ # but the transpose is required in Julia.
53
+ d_out[nk] = data # np.transpose(data)
54
+ fid.close()
55
+ return d_out
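The HDF5 datasets are keyed by strings such as `"[0, 0, 0, 1, 2]"`, encoding the lattice vector R and the 1-based atom indices (i, j); the line `nk = tuple(map(int, key[1:-1].split(',')))` above is the decoder. A standalone sketch of the same decoding (helper name is ours):

```python
def decode_key(key: str):
    """Turn an h5 key like '[0, 0, 0, 1, 2]' into the tuple
    (Rx, Ry, Rz, atom_i, atom_j) with 1-based atom indices."""
    # strip the surrounding brackets, split on commas, parse integers
    return tuple(map(int, key[1:-1].split(',')))

key = decode_key("[-1, 0, 2, 3, 4]")
```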
56
+
57
+
58
+ ev2Hartree = 0.036749324533634074
59
+ Bohr2Ang = 0.529177249
60
+
61
+
62
+ def genlist(x):
63
+ return np.linspace(x[0], x[1], int(x[2]))
64
+
65
+
66
+ def k_data2num_ks(kdata):
67
+ return int(kdata.split()[0])
68
+
69
+
70
+ def k_data2kpath(kdata):
71
+ return [float(x) for x in kdata.split()[1:7]]
72
+
73
+
74
+ def std_out_array(a):
75
+ return ''.join([str(x) + ' ' for x in a])
76
+
77
+
78
+ default_dtype = np.complex128
79
+
80
+ print(parsed_args.config)
81
+ with open(parsed_args.config) as f:
82
+ config = json.load(f)
83
+ calc_job = config["calc_job"]
84
+
85
+ if os.path.isfile(os.path.join(parsed_args.input_dir, "info.json")):
86
+ with open(os.path.join(parsed_args.input_dir, "info.json")) as f:
87
+ spinful = json.load(f)["isspinful"]
88
+ else:
89
+ spinful = False
90
+
91
+ site_positions = np.loadtxt(os.path.join(parsed_args.input_dir, "site_positions.dat"))
92
+
93
+ if len(site_positions.shape) == 2:
94
+ nsites = site_positions.shape[1]
95
+ else:
96
+ nsites = 1
97
+ # in case of single atom
98
+
99
+
100
+ with open(os.path.join(parsed_args.input_dir, "orbital_types.dat")) as f:
101
+ site_norbits = np.zeros(nsites, dtype=int)
102
+ orbital_types = []
103
+ for index_site in range(nsites):
104
+ orbital_type = list(map(int, f.readline().split()))
105
+ orbital_types.append(orbital_type)
106
+ site_norbits[index_site] = np.sum(np.array(orbital_type) * 2 + 1)
107
+ norbits = np.sum(site_norbits)
108
+ site_norbits_cumsum = np.cumsum(site_norbits)
109
+
110
+ rlat = np.loadtxt(os.path.join(parsed_args.input_dir, "rlat.dat")).T
111
+ # require transposition while reading rlat.dat in python
112
+
113
+
114
+ print("read h5")
115
+ begin_time = time()
116
+ hamiltonians_pred = _create_dict_h5(os.path.join(parsed_args.input_dir, "hamiltonians_pred.h5"))
117
+ overlaps = _create_dict_h5(os.path.join(parsed_args.input_dir, "overlaps.h5"))
118
+ print("Time for reading h5: ", time() - begin_time, "s")
119
+
120
+ H_R = {}
121
+ S_R = {}
122
+
123
+ print("construct Hamiltonian and overlap matrix in the real space")
124
+ begin_time = time()
125
+
126
+ # BS:
127
+ # this is for debug python and julia
128
+ # in julia, you can use 'sort(collect(keys(hamiltonians_pred)))'
129
+ # for key in dict(sorted(hamiltonians_pred.items())).keys():
130
+ for key in hamiltonians_pred.keys():
131
+
132
+ hamiltonian_pred = hamiltonians_pred[key]
133
+
134
+ if key in overlaps.keys():
135
+ overlap = overlaps[key]
136
+ else:
137
+ overlap = np.zeros_like(hamiltonian_pred)
138
+ if spinful:
139
+ overlap = np.vstack((np.hstack((overlap, np.zeros_like(overlap))), np.hstack((np.zeros_like(overlap), overlap))))
140
+ R = key[:3]
141
+ atom_i = key[3] - 1
142
+ atom_j = key[4] - 1
143
+
144
+ assert (site_norbits[atom_i], site_norbits[atom_j]) == hamiltonian_pred.shape
145
+ assert (site_norbits[atom_i], site_norbits[atom_j]) == overlap.shape
146
+
147
+ if R not in H_R.keys():
148
+ H_R[R] = np.zeros((norbits, norbits), dtype=default_dtype)
149
+ S_R[R] = np.zeros((norbits, norbits), dtype=default_dtype)
150
+
151
+ for block_matrix_i in range(1, site_norbits[atom_i]+1):
152
+ for block_matrix_j in range(1, site_norbits[atom_j]+1):
153
+ index_i = site_norbits_cumsum[atom_i] - site_norbits[atom_i] + block_matrix_i - 1
154
+ index_j = site_norbits_cumsum[atom_j] - site_norbits[atom_j] + block_matrix_j - 1
155
+ H_R[R][index_i, index_j] = hamiltonian_pred[block_matrix_i-1, block_matrix_j-1]
156
+ S_R[R][index_i, index_j] = overlap[block_matrix_i-1, block_matrix_j-1]
157
+
158
+
159
+ print("Time for constructing Hamiltonian and overlap matrix in the real space: ", time() - begin_time, " s")
160
+
161
+ if calc_job == "band":
162
+ fermi_level = config["fermi_level"]
163
+ k_data = config["k_data"]
164
+ ill_project = parsed_args.ill_project or ("ill_project" in config.keys() and config["ill_project"])
165
+ ill_threshold = max(parsed_args.ill_threshold, config["ill_threshold"] if ("ill_threshold" in config.keys()) else 0.)
166
+ multiprocessing = max(parsed_args.multiprocessing, config["multiprocessing"] if ("multiprocessing" in config.keys()) else 0)
167
+
168
+ print("calculate bands")
169
+ num_ks = [k_data2num_ks(k) for k in k_data]
170
+ kpaths = [k_data2kpath(k) for k in k_data]
171
+
172
+ egvals = np.zeros((norbits, sum(num_ks)))
173
+
174
+ begin_time = time()
175
+ idx_k = 0
176
+ # calculate total k points
177
+ total_num_ks = sum(num_ks)
178
+ list_index_kpath = []
179
+ list_index_kxyz=[]
180
+ for i in range(len(num_ks)):
181
+ list_index_kpath = list_index_kpath + ([i]*num_ks[i])
182
+ list_index_kxyz.extend(range(num_ks[i]))
183
+
184
+ def process_worker(k_point):
185
+ """ calculate band
186
+
187
+ Args:
188
+ k_point (int): the index of k point of all calculated k points
189
+
190
+ Returns:
191
+ dict: {
192
+ "k_point" (int): k_point,
193
+ "egval" (1D np.ndarray): eigenvalues,
194
+ "num_projected_out" (int): number of ill-conditioned eigenvalues projected out; default is 0
195
+ }
196
+ """
197
+ index_kpath = list_index_kpath[k_point]
198
+ kpath = kpaths[index_kpath]
199
+ pnkpts = num_ks[index_kpath]
200
+ kx = np.linspace(kpath[0], kpath[3], pnkpts)[list_index_kxyz[k_point]]
201
+ ky = np.linspace(kpath[1], kpath[4], pnkpts)[list_index_kxyz[k_point]]
202
+ kz = np.linspace(kpath[2], kpath[5], pnkpts)[list_index_kxyz[k_point]]
203
+
204
+ H_k = np.matrix(np.zeros((norbits, norbits), dtype=default_dtype))
205
+ S_k = np.matrix(np.zeros((norbits, norbits), dtype=default_dtype))
206
+ for R in H_R.keys():
207
+ H_k += H_R[R] * np.exp(1j*2*np.pi*np.dot([kx, ky, kz], R))
208
+ S_k += S_R[R] * np.exp(1j*2*np.pi*np.dot([kx, ky, kz], R))
209
+ # print(H_k)
210
+ H_k = (H_k + H_k.getH())/2.
211
+ S_k = (S_k + S_k.getH())/2.
212
+ num_projected_out = 0
213
+ if ill_project:
214
+ egval_S, egvec_S = linalg.eig(S_k)
215
+ project_index = np.argwhere(abs(egval_S) > ill_threshold).flatten()
216
+ if len(project_index) != norbits:
217
+ egvec_S = np.matrix(egvec_S[:, project_index])
218
+ num_projected_out = norbits - len(project_index)
219
+ H_k = egvec_S.H @ H_k @ egvec_S
220
+ S_k = egvec_S.H @ S_k @ egvec_S
221
+ egval = linalg.eigvalsh(H_k, S_k, lower=False)
222
+ egval = np.concatenate([egval, np.full(num_projected_out, 1e4)])
223
+ else:
224
+ egval = linalg.eigvalsh(H_k, S_k, lower=False)
225
+ else:
226
+ #---------------------------------------------
227
+ # BS: only eigenvalues are needed in this branch,
228
+ # so only the upper triangle of the matrices is used
229
+ egval = linalg.eigvalsh(H_k, S_k, lower=False)
230
+
231
+ return {"k_point":k_point, "egval":egval, "num_projected_out":num_projected_out}
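When the overlap matrix S has near-zero eigenvalues, the generalized problem H c = ε S c is numerically ill-conditioned, so the `ill_project` branch first restricts H and S to the subspace where S is well-conditioned. A self-contained NumPy sketch of that idea (our illustration: we fold the projection and a Löwdin-style orthonormalization into one transform instead of calling `scipy.linalg.eigvalsh(H, S)` on the projected pair):

```python
import numpy as np

def solve_projected(H, S, threshold=5e-4):
    """Solve H c = eps S c after discarding overlap eigenvectors whose
    eigenvalues fall below `threshold` (sketch of the ill_project idea)."""
    s, U = np.linalg.eigh(S)          # S is Hermitian
    keep = s > threshold              # well-conditioned subspace of S
    X = U[:, keep] / np.sqrt(s[keep]) # X^H S X = identity on that subspace
    return np.linalg.eigvalsh(X.conj().T @ H @ X)

H = np.diag([1.0, 2.0, 3.0]).astype(complex)
S = np.diag([1.0, 1.0, 1e-8]).astype(complex)
eps = solve_projected(H, S)  # the third basis function is projected out
```

As in the scripts, the discarded directions simply vanish from the spectrum; the production code pads them back with a large sentinel value (1e4) so the band array keeps a fixed shape.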
232
+
233
+ # parallizing the band calculation
234
+ if multiprocessing == 0:
235
+ print(f'No use of multiprocessing')
236
+ data_list = [process_worker(k_point) for k_point in tqdm.tqdm(range(sum(num_ks)))]
237
+ else:
238
+ pool_dict = {} if multiprocessing < 0 else {'nodes': multiprocessing}
239
+
240
+ with Pool(**pool_dict) as pool:
241
+ nodes = pool.nodes
242
+ print(f'Use multiprocessing x {nodes}')
243
+ data_list = list(tqdm.tqdm(pool.imap(process_worker, range(sum(num_ks))), total=sum(num_ks)))
244
+
245
+ # post-process returned band data, and store them in egvals with the order k_point
246
+ projected_out = []
247
+ for data in data_list:
248
+ egvals[:, data["k_point"]] = data["egval"]
249
+ if data["num_projected_out"] > 0:
250
+ projected_out.append(data["num_projected_out"])
251
+ if len(projected_out) > 0:
252
+ print(f"Ill-conditioned overlap eigenvalues were detected at {len(projected_out)} k-points.")
253
+ print(f"Projected out {int(np.average(projected_out))} eigenvalues on average.")
254
+ print('Finished the calculation of %d k-points in %d seconds' % (sum(num_ks), time() - begin_time))
255
+
256
+
257
+ # output in openmx band format
258
+ with open(os.path.join(parsed_args.output_dir, "openmx.Band"), "w") as f:
259
+ f.write("{} {} {}\n".format(norbits, 0, ev2Hartree * fermi_level))
260
+ openmx_rlat = np.reshape((rlat * Bohr2Ang), (1, -1))[0]
261
+ f.write(std_out_array(openmx_rlat) + "\n")
262
+ f.write(str(len(k_data)) + "\n")
263
+ for line in k_data:
264
+ f.write(line + "\n")
265
+ idx_k = 0
266
+ for i in range(len(kpaths)):
267
+ pnkpts = num_ks[i]
268
+ kstart = kpaths[i][:3]
269
+ kend = kpaths[i][3:]
270
+ k_list = np.zeros((pnkpts, 3))
271
+ for alpha in range(3):
272
+ k_list[:, alpha] = genlist([kstart[alpha], kend[alpha], pnkpts])
273
+ for j in range(pnkpts):
274
+ kvec = k_list[j, :]
275
+ f.write("{} {}\n".format(norbits, std_out_array(kvec)))
276
+ f.write(std_out_array(ev2Hartree * egvals[:, idx_k]) + "\n")
277
+ idx_k += 1
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/inference_default.ini ADDED
@@ -0,0 +1,23 @@
1
+ [basic]
2
+ work_dir = /your/own/path
3
+ OLP_dir = /your/own/path
4
+ interface = openmx
5
+ trained_model_dir = ["/your/trained/model1", "/your/trained/model2"]
6
+ task = [1, 2, 3, 4, 5]
7
+ sparse_calc_config = /your/own/path
8
+ eigen_solver = sparse_jl
9
+ disable_cuda = True
10
+ device = cuda:0
11
+ huge_structure = True
12
+ restore_blocks_py = True
13
+ gen_rc_idx = False
14
+ gen_rc_by_idx =
15
+ with_grad = False
16
+
17
+ [interpreter]
18
+ julia_interpreter = julia
19
+ python_interpreter = python
20
+
21
+ [graph]
22
+ radius = -1.0
23
+ create_from_DFT = True
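List-valued options in this INI (`trained_model_dir`, `task`) are written as JSON literals inside the string values. A sketch of how such a file can be read back with the standard library, assuming (the repo's actual option parsing may differ) that JSON-style fields are decoded with `json.loads`:

```python
import configparser
import json

# a trimmed copy of the default config above, embedded for the example
ini_text = """
[basic]
trained_model_dir = ["/your/trained/model1", "/your/trained/model2"]
task = [1, 2, 3, 4, 5]

[graph]
radius = -1.0
create_from_DFT = True
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# list-valued fields are valid JSON, so json.loads recovers Python lists
model_dirs = json.loads(config["basic"]["trained_model_dir"])
tasks = json.loads(config["basic"]["task"])
radius = config["graph"].getfloat("radius")
create_from_dft = config["graph"].getboolean("create_from_DFT")
```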
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/local_coordinate.jl ADDED
@@ -0,0 +1,79 @@
1
+ using DelimitedFiles, LinearAlgebra
2
+ using HDF5
3
+ using ArgParse
4
+ using StaticArrays
5
+
6
+
7
+ function parse_commandline()
8
+ s = ArgParseSettings()
9
+ @add_arg_table! s begin
10
+ "--input_dir", "-i"
11
+ help = "path of site_positions.dat, lat.dat, element.dat, and R_list.dat (overlaps.h5)"
12
+ arg_type = String
13
+ default = "./"
14
+ "--output_dir", "-o"
15
+ help = "path of output rc.h5"
16
+ arg_type = String
17
+ default = "./"
18
+ "--radius", "-r"
19
+ help = "cutoff radius"
20
+ arg_type = Float64
21
+ default = 8.0
22
+ "--create_from_DFT"
23
+ help = "retain edges by DFT overlaps neighbour"
24
+ arg_type = Bool
25
+ default = true
26
+ "--output_text"
27
+ help = "an option without argument, i.e. a flag"
28
+ action = :store_true
29
+ "--Hop_dir"
30
+ help = "path of Hop.jl"
31
+ arg_type = String
32
+ default = "/home/lihe/DeepH/process_ham/Hop.jl/"
33
+ end
34
+ return parse_args(s)
35
+ end
36
+ parsed_args = parse_commandline()
37
+
38
+ using Pkg
39
+ Pkg.activate(parsed_args["Hop_dir"])
40
+ using Hop
41
+
42
+
43
+ site_positions = readdlm(joinpath(parsed_args["input_dir"], "site_positions.dat"))
44
+ lat = readdlm(joinpath(parsed_args["input_dir"], "lat.dat"))
45
+ R_list_read = convert(Matrix{Int64}, readdlm(joinpath(parsed_args["input_dir"], "R_list.dat")))
46
+ num_R = size(R_list_read, 1)
47
+ R_list = Vector{SVector{3, Int64}}()
48
+ for index_R in 1:num_R
49
+ push!(R_list, SVector{3, Int64}(R_list_read[index_R, :]))
50
+ end
51
+
52
+ @info "get local coordinate"
53
+ begin_time = time()
54
+ rcoordinate = Hop.Deeph.rotate_system(site_positions, lat, R_list, parsed_args["radius"])
55
+ println("time for calculating local coordinate is: ", time() - begin_time)
56
+
57
+ if parsed_args["output_text"]
58
+ @info "output txt"
59
+ mkpath(joinpath(parsed_args["output_dir"], "rresult"))
60
+ mkpath(joinpath(parsed_args["output_dir"], "rresult/rc"))
61
+ for (R, coord) in rcoordinate
62
+ open(joinpath(parsed_args["output_dir"], "rresult/rc", string(R, "_real.dat")), "w") do f
63
+ writedlm(f, coord)
64
+ end
65
+ end
66
+ end
67
+
68
+ @info "output h5"
69
+ h5open(joinpath(parsed_args["input_dir"], "overlaps.h5"), "r") do fid_OLP
70
+ graph_key = Set(keys(fid_OLP))
71
+ h5open(joinpath(parsed_args["output_dir"], "rc.h5"), "w") do fid
72
+ for (key, coord) in rcoordinate
73
+ if (parsed_args["create_from_DFT"] == true) && (!(string(key) in graph_key))
74
+ continue
75
+ end
76
+ write(fid, string(key), permutedims(coord))
77
+ end
78
+ end
79
+ end
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/pred_ham.py ADDED
@@ -0,0 +1,365 @@
+ import json
+ import os
+ import time
+ import warnings
+ from typing import Union, List
+ import sys
+
+ import tqdm
+ from configparser import ConfigParser
+ import numpy as np
+ from pymatgen.core.structure import Structure
+ import torch
+ import torch.autograd.forward_ad as fwAD
+ import h5py
+
+ from deeph import get_graph, DeepHKernel, collate_fn, write_ham_h5, load_orbital_types, Rotate, dtype_dict, get_rc
+
+
+ def predict(input_dir: str, output_dir: str, disable_cuda: bool, device: str,
+ huge_structure: bool, restore_blocks_py: bool, trained_model_dirs: Union[str, List[str]]):
+ atom_num_orbital = load_orbital_types(os.path.join(input_dir, 'orbital_types.dat'))
+ if isinstance(trained_model_dirs, str):
+ trained_model_dirs = [trained_model_dirs]
+ assert isinstance(trained_model_dirs, list)
+ os.makedirs(output_dir, exist_ok=True)
+ predict_spinful = None
+
+ with torch.no_grad():
+ read_structure_flag = False
+ if restore_blocks_py:
+ hoppings_pred = {}
+ else:
+ index_model = 0
+ block_without_restoration = {}
+ os.makedirs(os.path.join(output_dir, 'block_without_restoration'), exist_ok=True)
+ for trained_model_dir in tqdm.tqdm(trained_model_dirs):
+ old_version = False
+ assert os.path.exists(os.path.join(trained_model_dir, 'config.ini'))
+ if os.path.exists(os.path.join(trained_model_dir, 'best_model.pt')) is False:
+ old_version = True
+ assert os.path.exists(os.path.join(trained_model_dir, 'best_model.pkl'))
+ assert os.path.exists(os.path.join(trained_model_dir, 'src'))
+
+ config = ConfigParser()
+ config.read(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'default.ini'))
+ config.read(os.path.join(trained_model_dir, 'config.ini'))
+ config.set('basic', 'save_dir', os.path.join(output_dir, 'pred_ham_std'))
+ config.set('basic', 'disable_cuda', str(disable_cuda))
+ config.set('basic', 'device', str(device))
+ config.set('basic', 'save_to_time_folder', 'False')
+ config.set('basic', 'tb_writer', 'False')
+ config.set('train', 'pretrained', '')
+ config.set('train', 'resume', '')
+
+ kernel = DeepHKernel(config)
+ if old_version is False:
+ checkpoint = kernel.build_model(trained_model_dir, old_version)
+ else:
+ warnings.warn('You are using a trained model from an old version')
+ checkpoint = torch.load(
+ os.path.join(trained_model_dir, 'best_model.pkl'),
+ map_location=kernel.device
+ )
+ for key in ['index_to_Z', 'Z_to_index', 'spinful']:
+ if key in checkpoint:
+ setattr(kernel, key, checkpoint[key])
+ if hasattr(kernel, 'index_to_Z') is False:
+ kernel.index_to_Z = torch.arange(config.getint('basic', 'max_element') + 1)
+ if hasattr(kernel, 'Z_to_index') is False:
+ kernel.Z_to_index = torch.arange(config.getint('basic', 'max_element') + 1)
+ if hasattr(kernel, 'spinful') is False:
+ kernel.spinful = False
+ kernel.num_species = len(kernel.index_to_Z)
+ print("=> load best checkpoint (epoch {})".format(checkpoint['epoch']))
+ print(f"=> Atomic types: {kernel.index_to_Z.tolist()}, "
+ f"spinful: {kernel.spinful}, the number of atomic types: {len(kernel.index_to_Z)}.")
+ kernel.build_model(trained_model_dir, old_version)
+ kernel.model.load_state_dict(checkpoint['state_dict'])
+
+ if predict_spinful is None:
+ predict_spinful = kernel.spinful
+ else:
+ assert predict_spinful == kernel.spinful, "Different models' spinful are not compatible"
+
+ if read_structure_flag is False:
+ read_structure_flag = True
+ structure = Structure(np.loadtxt(os.path.join(input_dir, 'lat.dat')).T,
+ np.loadtxt(os.path.join(input_dir, 'element.dat')),
+ np.loadtxt(os.path.join(input_dir, 'site_positions.dat')).T,
+ coords_are_cartesian=True,
+ to_unit_cell=False)
+ cart_coords = torch.tensor(structure.cart_coords, dtype=torch.get_default_dtype())
+ frac_coords = torch.tensor(structure.frac_coords, dtype=torch.get_default_dtype())
+ numbers = kernel.Z_to_index[torch.tensor(structure.atomic_numbers)]
+ structure.lattice.matrix.setflags(write=True)
+ lattice = torch.tensor(structure.lattice.matrix, dtype=torch.get_default_dtype())
+ inv_lattice = torch.inverse(lattice)
+
+ if os.path.exists(os.path.join(input_dir, 'graph.pkl')):
+ data = torch.load(os.path.join(input_dir, 'graph.pkl'))
+ print(f"Load processed graph from {os.path.join(input_dir, 'graph.pkl')}")
+ else:
+ begin = time.time()
+ data = get_graph(cart_coords, frac_coords, numbers, 0,
+ r=kernel.config.getfloat('graph', 'radius'),
+ max_num_nbr=kernel.config.getint('graph', 'max_num_nbr'),
+ numerical_tol=1e-8, lattice=lattice, default_dtype_torch=torch.get_default_dtype(),
+ tb_folder=input_dir, interface="h5_rc_only",
+ num_l=kernel.config.getint('network', 'num_l'),
+ create_from_DFT=kernel.config.getboolean('graph', 'create_from_DFT',
+ fallback=True),
+ if_lcmp_graph=kernel.config.getboolean('graph', 'if_lcmp_graph', fallback=True),
+ separate_onsite=kernel.separate_onsite,
+ target=kernel.config.get('basic', 'target'), huge_structure=huge_structure,
+ if_new_sp=kernel.config.getboolean('graph', 'new_sp', fallback=False),
+ )
+ torch.save(data, os.path.join(input_dir, 'graph.pkl'))
+ print(
+ f"Save processed graph to {os.path.join(input_dir, 'graph.pkl')}, cost {time.time() - begin} seconds")
+ batch, subgraph = collate_fn([data])
+ sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index = subgraph
+
+ output = kernel.model(batch.x.to(kernel.device), batch.edge_index.to(kernel.device),
+ batch.edge_attr.to(kernel.device),
+ batch.batch.to(kernel.device),
+ sub_atom_idx.to(kernel.device), sub_edge_idx.to(kernel.device),
+ sub_edge_ang.to(kernel.device), sub_index.to(kernel.device),
+ huge_structure=huge_structure)
+ output = output.detach().cpu()
+ if restore_blocks_py:
+ for index in range(batch.edge_attr.shape[0]):
+ R = torch.round(batch.edge_attr[index, 4:7] @ inv_lattice - batch.edge_attr[index, 7:10] @ inv_lattice).int().tolist()
+ i, j = batch.edge_index[:, index]
+ key_term = (*R, i.item() + 1, j.item() + 1)
+ key_term = str(list(key_term))
+ for index_orbital, orbital_dict in enumerate(kernel.orbital):
+ if f'{kernel.index_to_Z[numbers[i]].item()} {kernel.index_to_Z[numbers[j]].item()}' not in orbital_dict:
+ continue
+ orbital_i, orbital_j = orbital_dict[f'{kernel.index_to_Z[numbers[i]].item()} {kernel.index_to_Z[numbers[j]].item()}']
+
+ if not key_term in hoppings_pred:
+ if kernel.spinful:
+ hoppings_pred[key_term] = np.full((2 * atom_num_orbital[i], 2 * atom_num_orbital[j]), np.nan + np.nan * (1j))
+ else:
+ hoppings_pred[key_term] = np.full((atom_num_orbital[i], atom_num_orbital[j]), np.nan)
+ if kernel.spinful:
+ hoppings_pred[key_term][orbital_i, orbital_j] = output[index][index_orbital * 8 + 0] + output[index][index_orbital * 8 + 1] * 1j
+ hoppings_pred[key_term][atom_num_orbital[i] + orbital_i, atom_num_orbital[j] + orbital_j] = output[index][index_orbital * 8 + 2] + output[index][index_orbital * 8 + 3] * 1j
+ hoppings_pred[key_term][orbital_i, atom_num_orbital[j] + orbital_j] = output[index][index_orbital * 8 + 4] + output[index][index_orbital * 8 + 5] * 1j
+ hoppings_pred[key_term][atom_num_orbital[i] + orbital_i, orbital_j] = output[index][index_orbital * 8 + 6] + output[index][index_orbital * 8 + 7] * 1j
+ else:
+ hoppings_pred[key_term][orbital_i, orbital_j] = output[index][index_orbital] # about output shape w/ or w/o soc, see graph.py line 164, and kernel.py line 281.
+ else:
+ if 'edge_index' not in block_without_restoration:
+ assert index_model == 0
+ block_without_restoration['edge_index'] = batch.edge_index
+ block_without_restoration['edge_attr'] = batch.edge_attr
+ block_without_restoration[f'output_{index_model}'] = output.numpy()
+ with open(os.path.join(output_dir, 'block_without_restoration', f'orbital_{index_model}.json'), 'w') as orbital_f:
+ json.dump(kernel.orbital, orbital_f, indent=4)
+ index_model += 1
+ sys.stdout = sys.stdout.terminal
+ sys.stderr = sys.stderr.terminal
+
+ if restore_blocks_py:
+ for hamiltonian in hoppings_pred.values():
+ assert np.all(np.isnan(hamiltonian) == False)
+ write_ham_h5(hoppings_pred, path=os.path.join(output_dir, 'rh_pred.h5'))
+ else:
+ block_without_restoration['num_model'] = index_model
+ write_ham_h5(block_without_restoration, path=os.path.join(output_dir, 'block_without_restoration', 'block_without_restoration.h5'))
+ with open(os.path.join(output_dir, "info.json"), 'w') as info_f:
+ json.dump({
+ "isspinful": predict_spinful
+ }, info_f)
+
+
+ def predict_with_grad(input_dir: str, output_dir: str, disable_cuda: bool, device: str,
+ huge_structure: bool, trained_model_dirs: Union[str, List[str]]):
+ atom_num_orbital, orbital_types = load_orbital_types(os.path.join(input_dir, 'orbital_types.dat'), return_orbital_types=True)
+
+ if isinstance(trained_model_dirs, str):
+ trained_model_dirs = [trained_model_dirs]
+ assert isinstance(trained_model_dirs, list)
+ os.makedirs(output_dir, exist_ok=True)
+ predict_spinful = None
+
+ read_structure_flag = False
+ rh_dict = {}
+ hamiltonians_pred = {}
+ hamiltonians_grad_pred = {}
+
+ for trained_model_dir in tqdm.tqdm(trained_model_dirs):
+ old_version = False
+ assert os.path.exists(os.path.join(trained_model_dir, 'config.ini'))
+ if os.path.exists(os.path.join(trained_model_dir, 'best_model.pt')) is False:
+ old_version = True
+ assert os.path.exists(os.path.join(trained_model_dir, 'best_model.pkl'))
+ assert os.path.exists(os.path.join(trained_model_dir, 'src'))
+
+ config = ConfigParser()
+ config.read(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'default.ini'))
+ config.read(os.path.join(trained_model_dir, 'config.ini'))
+ config.set('basic', 'save_dir', os.path.join(output_dir, 'pred_ham_std'))
+ config.set('basic', 'disable_cuda', str(disable_cuda))
+ config.set('basic', 'device', str(device))
+ config.set('basic', 'save_to_time_folder', 'False')
+ config.set('basic', 'tb_writer', 'False')
+ config.set('train', 'pretrained', '')
+ config.set('train', 'resume', '')
+
+ kernel = DeepHKernel(config)
+ if old_version is False:
+ checkpoint = kernel.build_model(trained_model_dir, old_version)
+ else:
+ warnings.warn('You are using a trained model from an old version')
+ checkpoint = torch.load(
+ os.path.join(trained_model_dir, 'best_model.pkl'),
+ map_location=kernel.device
+ )
+ for key in ['index_to_Z', 'Z_to_index', 'spinful']:
+ if key in checkpoint:
+ setattr(kernel, key, checkpoint[key])
+ if hasattr(kernel, 'index_to_Z') is False:
+ kernel.index_to_Z = torch.arange(config.getint('basic', 'max_element') + 1)
+ if hasattr(kernel, 'Z_to_index') is False:
+ kernel.Z_to_index = torch.arange(config.getint('basic', 'max_element') + 1)
+ if hasattr(kernel, 'spinful') is False:
+ kernel.spinful = False
+ kernel.num_species = len(kernel.index_to_Z)
+ print("=> load best checkpoint (epoch {})".format(checkpoint['epoch']))
+ print(f"=> Atomic types: {kernel.index_to_Z.tolist()}, "
+ f"spinful: {kernel.spinful}, the number of atomic types: {len(kernel.index_to_Z)}.")
+ kernel.build_model(trained_model_dir, old_version)
+ kernel.model.load_state_dict(checkpoint['state_dict'])
+
+ if predict_spinful is None:
+ predict_spinful = kernel.spinful
+ else:
+ assert predict_spinful == kernel.spinful, "Different models' spinful are not compatible"
+
+ if read_structure_flag is False:
+ read_structure_flag = True
+ structure = Structure(np.loadtxt(os.path.join(input_dir, 'lat.dat')).T,
+ np.loadtxt(os.path.join(input_dir, 'element.dat')),
+ np.loadtxt(os.path.join(input_dir, 'site_positions.dat')).T,
+ coords_are_cartesian=True,
+ to_unit_cell=False)
+ cart_coords = torch.tensor(structure.cart_coords, dtype=torch.get_default_dtype(), requires_grad=True, device=kernel.device)
+ num_atom = cart_coords.shape[0]
+ frac_coords = torch.tensor(structure.frac_coords, dtype=torch.get_default_dtype())
+ numbers = kernel.Z_to_index[torch.tensor(structure.atomic_numbers)]
+ structure.lattice.matrix.setflags(write=True)
+ lattice = torch.tensor(structure.lattice.matrix, dtype=torch.get_default_dtype())
+ inv_lattice = torch.inverse(lattice)
+
+ fid_rc = get_rc(input_dir, None, radius=-1, create_from_DFT=True, if_require_grad=True, cart_coords=cart_coords)
+
+ assert kernel.config.getboolean('graph', 'new_sp', fallback=False)
+ data = get_graph(cart_coords.to(kernel.device), frac_coords, numbers, 0,
+ r=kernel.config.getfloat('graph', 'radius'),
+ max_num_nbr=kernel.config.getint('graph', 'max_num_nbr'),
+ numerical_tol=1e-8, lattice=lattice, default_dtype_torch=torch.get_default_dtype(),
+ tb_folder=input_dir, interface="h5_rc_only",
+ num_l=kernel.config.getint('network', 'num_l'),
+ create_from_DFT=kernel.config.getboolean('graph', 'create_from_DFT', fallback=True),
+ if_lcmp_graph=kernel.config.getboolean('graph', 'if_lcmp_graph', fallback=True),
+ separate_onsite=kernel.separate_onsite,
+ target=kernel.config.get('basic', 'target'), huge_structure=huge_structure,
+ if_new_sp=True, if_require_grad=True, fid_rc=fid_rc)
+ batch, subgraph = collate_fn([data])
+ sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index = subgraph
+
+ torch_dtype, torch_dtype_real, torch_dtype_complex = dtype_dict[torch.get_default_dtype()]
+ rotate_kernel = Rotate(torch_dtype, torch_dtype_real=torch_dtype_real,
+ torch_dtype_complex=torch_dtype_complex,
+ device=kernel.device, spinful=kernel.spinful)
+
+ output = kernel.model(batch.x, batch.edge_index.to(kernel.device),
+ batch.edge_attr,
+ batch.batch.to(kernel.device),
+ sub_atom_idx.to(kernel.device), sub_edge_idx.to(kernel.device),
+ sub_edge_ang, sub_index.to(kernel.device),
+ huge_structure=huge_structure)
+
+ index_for_matrix_block_real_dict = {} # key is atomic number pair
+ if kernel.spinful:
+ index_for_matrix_block_imag_dict = {} # key is atomic number pair
+
+ for index in range(batch.edge_attr.shape[0]):
+ R = torch.round(batch.edge_attr[index, 4:7].cpu() @ inv_lattice - batch.edge_attr[index, 7:10].cpu() @ inv_lattice).int().tolist()
+ i, j = batch.edge_index[:, index]
+ key_tensor = torch.tensor([*R, i, j])
+ numbers_pair = (kernel.index_to_Z[numbers[i]].item(), kernel.index_to_Z[numbers[j]].item())
+ if numbers_pair not in index_for_matrix_block_real_dict:
+ if not kernel.spinful:
+ index_for_matrix_block_real = torch.full((atom_num_orbital[i], atom_num_orbital[j]), -1)
+ else:
+ index_for_matrix_block_real = torch.full((2 * atom_num_orbital[i], 2 * atom_num_orbital[j]), -1)
+ index_for_matrix_block_imag = torch.full((2 * atom_num_orbital[i], 2 * atom_num_orbital[j]), -1)
+ for index_orbital, orbital_dict in enumerate(kernel.orbital):
+ if f'{kernel.index_to_Z[numbers[i]].item()} {kernel.index_to_Z[numbers[j]].item()}' not in orbital_dict:
+ continue
+ orbital_i, orbital_j = orbital_dict[f'{kernel.index_to_Z[numbers[i]].item()} {kernel.index_to_Z[numbers[j]].item()}']
+ if not kernel.spinful:
+ index_for_matrix_block_real[orbital_i, orbital_j] = index_orbital
+ else:
+ index_for_matrix_block_real[orbital_i, orbital_j] = index_orbital * 8 + 0
+ index_for_matrix_block_imag[orbital_i, orbital_j] = index_orbital * 8 + 1
+ index_for_matrix_block_real[atom_num_orbital[i] + orbital_i, atom_num_orbital[j] + orbital_j] = index_orbital * 8 + 2
+ index_for_matrix_block_imag[atom_num_orbital[i] + orbital_i, atom_num_orbital[j] + orbital_j] = index_orbital * 8 + 3
+ index_for_matrix_block_real[orbital_i, atom_num_orbital[j] + orbital_j] = index_orbital * 8 + 4
+ index_for_matrix_block_imag[orbital_i, atom_num_orbital[j] + orbital_j] = index_orbital * 8 + 5
+ index_for_matrix_block_real[atom_num_orbital[i] + orbital_i, orbital_j] = index_orbital * 8 + 6
+ index_for_matrix_block_imag[atom_num_orbital[i] + orbital_i, orbital_j] = index_orbital * 8 + 7
+ assert torch.all(index_for_matrix_block_real != -1), 'json string "orbital" should be complete for Hamiltonian grad'
+ if kernel.spinful:
+ assert torch.all(index_for_matrix_block_imag != -1), 'json string "orbital" should be complete for Hamiltonian grad'
+
+ index_for_matrix_block_real_dict[numbers_pair] = index_for_matrix_block_real
+ if kernel.spinful:
+ index_for_matrix_block_imag_dict[numbers_pair] = index_for_matrix_block_imag
+ else:
+ index_for_matrix_block_real = index_for_matrix_block_real_dict[numbers_pair]
+ if kernel.spinful:
+ index_for_matrix_block_imag = index_for_matrix_block_imag_dict[numbers_pair]
+
+ if not kernel.spinful:
+ rh_dict[key_tensor] = output[index][index_for_matrix_block_real]
+ else:
+ rh_dict[key_tensor] = output[index][index_for_matrix_block_real] + 1j * output[index][index_for_matrix_block_imag]
+
+ sys.stdout = sys.stdout.terminal
+ sys.stderr = sys.stderr.terminal
+
+ print("=> Hamiltonian has been predicted, calculate the grad...")
+ for key_tensor, rotated_hamiltonian in tqdm.tqdm(rh_dict.items()):
+ atom_i = key_tensor[3]
+ atom_j = key_tensor[4]
+ assert atom_i >= 0
+ assert atom_i < num_atom
+ assert atom_j >= 0
+ assert atom_j < num_atom
+ key_str = str(list([key_tensor[0].item(), key_tensor[1].item(), key_tensor[2].item(), atom_i.item() + 1, atom_j.item() + 1]))
+ assert key_str in fid_rc, f'Cannot find the key "{key_str}" in rc.h5'
+ # rotation_matrix = torch.tensor(fid_rc[key_str], dtype=torch_dtype_real, device=kernel.device).T
+ rotation_matrix = fid_rc[key_str].T
+ hamiltonian = rotate_kernel.rotate_openmx_H(rotated_hamiltonian, rotation_matrix, orbital_types[atom_i], orbital_types[atom_j])
+ hamiltonians_pred[key_str] = hamiltonian.detach().cpu()
+ assert kernel.spinful is False # check whether this is correct for SOC
+ assert len(hamiltonian.shape) == 2
+ dim_1, dim_2 = hamiltonian.shape[:]
+ assert key_str not in hamiltonians_grad_pred
+ if not kernel.spinful:
+ hamiltonians_grad_pred[key_str] = np.full((dim_1, dim_2, num_atom, 3), np.nan)
+ else:
+ hamiltonians_grad_pred[key_str] = np.full((2 * dim_1, 2 * dim_2, num_atom, 3), np.nan + 1j * np.nan)
+
+ write_ham_h5(hamiltonians_pred, path=os.path.join(output_dir, 'hamiltonians_pred.h5'))
+ write_ham_h5(hamiltonians_grad_pred, path=os.path.join(output_dir, 'hamiltonians_grad_pred.h5'))
+ with open(os.path.join(output_dir, "info.json"), 'w') as info_f:
+ json.dump({
+ "isspinful": predict_spinful
+ }, info_f)
+ fid_rc.close()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/restore_blocks.jl ADDED
@@ -0,0 +1,115 @@
+ using JSON
+ using LinearAlgebra
+ using DelimitedFiles
+ using HDF5
+ using ArgParse
+
+
+ function parse_commandline()
+ s = ArgParseSettings()
+ @add_arg_table! s begin
+ "--input_dir", "-i"
+ help = "path of block_without_restoration, element.dat, lat.dat, site_positions.dat, orbital_types.dat, and info.json"
+ arg_type = String
+ default = "./"
+ "--output_dir", "-o"
+ help = "path of output rh_pred.h5"
+ arg_type = String
+ default = "./"
+ end
+ return parse_args(s)
+ end
+ parsed_args = parse_commandline()
+
+
+ function _create_dict_h5(filename::String)
+ fid = h5open(filename, "r")
+ T = eltype(fid[keys(fid)[1]])
+ d_out = Dict{Array{Int64,1}, Array{T, 2}}()
+ for key in keys(fid)
+ data = read(fid[key])
+ nk = map(x -> parse(Int64, convert(String, x)), split(key[2 : length(key) - 1], ','))
+ d_out[nk] = permutedims(data)
+ end
+ close(fid)
+ return d_out
+ end
+
+
+ if isfile(joinpath(parsed_args["input_dir"],"info.json"))
+ spinful = JSON.parsefile(joinpath(parsed_args["input_dir"],"info.json"))["isspinful"]
+ else
+ spinful = false
+ end
+
+ numbers = readdlm(joinpath(parsed_args["input_dir"], "element.dat"), Int64)
+ lattice = readdlm(joinpath(parsed_args["input_dir"], "lat.dat"))
+ inv_lattice = inv(lattice)
+ site_positions = readdlm(joinpath(parsed_args["input_dir"], "site_positions.dat"))
+ nsites = size(site_positions, 2)
+ orbital_types_f = open(joinpath(parsed_args["input_dir"], "orbital_types.dat"), "r")
+ site_norbits = zeros(nsites)
+ orbital_types = Vector{Vector{Int64}}()
+ for index_site = 1:nsites
+ orbital_type = parse.(Int64, split(readline(orbital_types_f)))
+ push!(orbital_types, orbital_type)
+ end
+ site_norbits = (x->sum(x .* 2 .+ 1)).(orbital_types) * (1 + spinful)
+ atom_num_orbital = (x->sum(x .* 2 .+ 1)).(orbital_types)
+
+ fid = h5open(joinpath(parsed_args["input_dir"], "block_without_restoration", "block_without_restoration.h5"), "r")
+ num_model = read(fid["num_model"])
+ T_pytorch = eltype(fid["output_0"])
+ if spinful
+ T_Hamiltonian = Complex{T_pytorch}
+ else
+ T_Hamiltonian = T_pytorch
+ end
+ hoppings_pred = Dict{Array{Int64,1}, Array{T_Hamiltonian, 2}}()
+ println("Found $num_model models, spinful: $spinful")
+ edge_attr = read(fid["edge_attr"])
+ edge_index = read(fid["edge_index"])
+ for index_model in 0:(num_model-1)
+ output = read(fid["output_$index_model"])
+ orbital = JSON.parsefile(joinpath(parsed_args["input_dir"], "block_without_restoration", "orbital_$index_model.json"))
+ orbital = convert(Vector{Dict{String, Vector{Int}}}, orbital)
+ for index in 1:size(edge_index, 1)
+ R = Int.(round.(inv_lattice * edge_attr[5:7, index] - inv_lattice * edge_attr[8:10, index]))
+ i = edge_index[index, 1] + 1
+ j = edge_index[index, 2] + 1
+ key_term = cat(R, i, j, dims=1)
+ for (index_orbital, orbital_dict) in enumerate(orbital)
+ atomic_number_pair = "$(numbers[i]) $(numbers[j])"
+ if !(atomic_number_pair ∈ keys(orbital_dict))
+ continue
+ end
+ orbital_i, orbital_j = orbital_dict[atomic_number_pair]
+ orbital_i += 1
+ orbital_j += 1
+
+ if !(key_term ∈ keys(hoppings_pred))
+ if spinful
+ hoppings_pred[key_term] = fill(NaN + NaN * im, 2 * atom_num_orbital[i], 2 * atom_num_orbital[j])
+ else
+ hoppings_pred[key_term] = fill(NaN, atom_num_orbital[i], atom_num_orbital[j])
+ end
+ end
+ if spinful
+ hoppings_pred[key_term][orbital_i, orbital_j] = output[index_orbital * 8 - 7, index] + output[index_orbital * 8 - 6, index] * im
+ hoppings_pred[key_term][atom_num_orbital[i] + orbital_i, atom_num_orbital[j] + orbital_j] = output[index_orbital * 8 - 5, index] + output[index_orbital * 8 - 4, index] * im
+ hoppings_pred[key_term][orbital_i, atom_num_orbital[j] + orbital_j] = output[index_orbital * 8 - 3, index] + output[index_orbital * 8 - 2, index] * im
+ hoppings_pred[key_term][atom_num_orbital[i] + orbital_i, orbital_j] = output[index_orbital * 8 - 1, index] + output[index_orbital * 8, index] * im
+ else
+ hoppings_pred[key_term][orbital_i, orbital_j] = output[index_orbital, index]
+ end
+ end
+ end
+ end
+ close(fid)
+
+ h5open(joinpath(parsed_args["output_dir"], "rh_pred.h5"), "w") do fid
+ for (key, rh_pred) in hoppings_pred
+ write(fid, string(key), permutedims(rh_pred))
+ end
+ end
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/inference/sparse_calc.jl ADDED
@@ -0,0 +1,412 @@
+ using DelimitedFiles, LinearAlgebra, JSON
+ using HDF5
+ using ArgParse
+ using SparseArrays
+ using Pardiso, Arpack, LinearMaps
+ using JLD
+ # BLAS.set_num_threads(1)
+
+ const ev2Hartree = 0.036749324533634074
+ const Bohr2Ang = 0.529177249
+ const default_dtype = Complex{Float64}
+
+
+ function parse_commandline()
+ s = ArgParseSettings()
+ @add_arg_table! s begin
+ "--input_dir", "-i"
+ help = "path of rlat.dat, orbital_types.dat, site_positions.dat, hamiltonians_pred.h5, and overlaps.h5"
+ arg_type = String
+ default = "./"
+ "--output_dir", "-o"
+ help = "path of output openmx.Band"
+ arg_type = String
+ default = "./"
+ "--config"
+ help = "config file in the format of JSON"
+ arg_type = String
+ "--ill_project"
+ help = "projects out the eigenvectors of the overlap matrix that correspond to eigenvalues smaller than ill_threshold"
+ arg_type = Bool
+ default = true
+ "--ill_threshold"
+ help = "threshold for ill_project"
+ arg_type = Float64
+ default = 5e-4
+ end
+ return parse_args(s)
+ end
+
+
+ function _create_dict_h5(filename::String)
+ fid = h5open(filename, "r")
+ T = eltype(fid[keys(fid)[1]])
+ d_out = Dict{Array{Int64,1}, Array{T, 2}}()
+ for key in keys(fid)
+ data = read(fid[key])
+ nk = map(x -> parse(Int64, convert(String, x)), split(key[2 : length(key) - 1], ','))
+ d_out[nk] = permutedims(data)
+ end
+ close(fid)
+ return d_out
+ end
+
+
+ # The function construct_linear_map below comes from https://discourse.julialang.org/t/smallest-magnitude-eigenvalues-of-the-generalized-eigenvalue-equation-for-a-large-sparse-matrix/75485/11
+ function construct_linear_map(H, S)
+ ps = MKLPardisoSolver()
+ set_matrixtype!(ps, Pardiso.COMPLEX_HERM_INDEF)
+ pardisoinit(ps)
+ fix_iparm!(ps, :N)
+ H_pardiso = get_matrix(ps, H, :N)
+ b = rand(ComplexF64, size(H, 1))
+ set_phase!(ps, Pardiso.ANALYSIS)
+ pardiso(ps, H_pardiso, b)
+ set_phase!(ps, Pardiso.NUM_FACT)
+ pardiso(ps, H_pardiso, b)
+ return (
+ LinearMap{ComplexF64}(
+ (y, x) -> begin
+ set_phase!(ps, Pardiso.SOLVE_ITERATIVE_REFINE)
+ pardiso(ps, y, H_pardiso, S * x)
+ end,
+ size(H, 1);
+ ismutating=true
+ ),
+ ps
+ )
+ end
+
+
+ function genlist(x)
+ return collect(range(x[1], stop = x[2], length = Int64(x[3])))
+ end
+
+
+ function k_data2num_ks(kdata::AbstractString)
+ return parse(Int64,split(kdata)[1])
+ end
+
+
+ function k_data2kpath(kdata::AbstractString)
+ return map(x->parse(Float64,x), split(kdata)[2:7])
+ end
+
+
+ function std_out_array(a::AbstractArray)
+ return string(map(x->string(x," "),a)...)
+ end
+
+
+ function constructmeshkpts(nkmesh::Vector{Int64}; offset::Vector{Float64}=[0.0, 0.0, 0.0],
+ k1::Vector{Float64}=[0.0, 0.0, 0.0], k2::Vector{Float64}=[1.0, 1.0, 1.0])
+ length(nkmesh) == 3 || throw(ArgumentError("nkmesh has wrong size."))
+ nkpts = prod(nkmesh)
+ kpts = zeros(3, nkpts)
+ ik = 1
+ for ikx in 1:nkmesh[1], iky in 1:nkmesh[2], ikz in 1:nkmesh[3]
+ kpts[:, ik] = [
+ (ikx-1)/nkmesh[1]*(k2[1]-k1[1])+k1[1],
+ (iky-1)/nkmesh[2]*(k2[2]-k1[2])+k1[2],
+ (ikz-1)/nkmesh[3]*(k2[3]-k1[3])+k1[3]
+ ]
+ ik += 1
+ end
+ return kpts.+offset
+ end
+
+
+ function main()
+ parsed_args = parse_commandline()
+
+ println(parsed_args["config"])
+ config = JSON.parsefile(parsed_args["config"])
+ calc_job = config["calc_job"]
+ ill_project = parsed_args["ill_project"]
+ ill_threshold = parsed_args["ill_threshold"]
+
+ if isfile(joinpath(parsed_args["input_dir"],"info.json"))
+ spinful = JSON.parsefile(joinpath(parsed_args["input_dir"],"info.json"))["isspinful"]
+ else
+ spinful = false
+ end
+
+ site_positions = readdlm(joinpath(parsed_args["input_dir"], "site_positions.dat"))
+ nsites = size(site_positions, 2)
+
+ orbital_types_f = open(joinpath(parsed_args["input_dir"], "orbital_types.dat"), "r")
+ site_norbits = zeros(nsites)
+ orbital_types = Vector{Vector{Int64}}()
+ for index_site = 1:nsites
+ orbital_type = parse.(Int64, split(readline(orbital_types_f)))
+ push!(orbital_types, orbital_type)
+ end
+ site_norbits = (x->sum(x .* 2 .+ 1)).(orbital_types) * (1 + spinful)
+ norbits = sum(site_norbits)
+ site_norbits_cumsum = cumsum(site_norbits)
+
+ rlat = readdlm(joinpath(parsed_args["input_dir"], "rlat.dat"))
+
+
+ if isfile(joinpath(parsed_args["input_dir"], "sparse_matrix.jld"))
+ @info string("read sparse matrix from ", parsed_args["input_dir"], "/sparse_matrix.jld")
+ H_R = load(joinpath(parsed_args["input_dir"], "sparse_matrix.jld"), "H_R")
+ S_R = load(joinpath(parsed_args["input_dir"], "sparse_matrix.jld"), "S_R")
+ else
+ @info "read h5"
+ begin_time = time()
+ hamiltonians_pred = _create_dict_h5(joinpath(parsed_args["input_dir"], "hamiltonians_pred.h5"))
+ overlaps = _create_dict_h5(joinpath(parsed_args["input_dir"], "overlaps.h5"))
+ println("Time for reading h5: ", time() - begin_time, "s")
+
+ I_R = Dict{Vector{Int64}, Vector{Int64}}()
+ J_R = Dict{Vector{Int64}, Vector{Int64}}()
+ H_V_R = Dict{Vector{Int64}, Vector{default_dtype}}()
+ S_V_R = Dict{Vector{Int64}, Vector{default_dtype}}()
+
+ @info "construct sparse matrix in the format of COO"
+ begin_time = time()
+ for key in collect(keys(hamiltonians_pred))
+ hamiltonian_pred = hamiltonians_pred[key]
+ if (key ∈ keys(overlaps))
+ overlap = overlaps[key]
+ if spinful
+ overlap = vcat(hcat(overlap,zeros(size(overlap))),hcat(zeros(size(overlap)),overlap)) # the readout overlap matrix only contains the upper-left block # TODO maybe drop the zeros?
+ end
+ else
+ # continue
+ overlap = zero(hamiltonian_pred)
+ end
+ R = key[1:3]; atom_i=key[4]; atom_j=key[5]
+
+ @assert (site_norbits[atom_i], site_norbits[atom_j]) == size(hamiltonian_pred)
+ @assert (site_norbits[atom_i], site_norbits[atom_j]) == size(overlap)
+ if !(R ∈ keys(I_R))
+ I_R[R] = Vector{Int64}()
+ J_R[R] = Vector{Int64}()
+ H_V_R[R] = Vector{default_dtype}()
+ S_V_R[R] = Vector{default_dtype}()
+ end
+ for block_matrix_i in 1:site_norbits[atom_i]
+ for block_matrix_j in 1:site_norbits[atom_j]
+ coo_i = site_norbits_cumsum[atom_i] - site_norbits[atom_i] + block_matrix_i
+ coo_j = site_norbits_cumsum[atom_j] - site_norbits[atom_j] + block_matrix_j
+ push!(I_R[R], coo_i)
+ push!(J_R[R], coo_j)
+ push!(H_V_R[R], hamiltonian_pred[block_matrix_i, block_matrix_j])
+ push!(S_V_R[R], overlap[block_matrix_i, block_matrix_j])
+ end
+ end
+ end
+ println("Time for constructing sparse matrix in the format of COO: ", time() - begin_time, "s")
+
+ @info "convert sparse matrix to the format of CSC"
+ begin_time = time()
+ H_R = Dict{Vector{Int64}, SparseMatrixCSC{default_dtype, Int64}}()
+ S_R = Dict{Vector{Int64}, SparseMatrixCSC{default_dtype, Int64}}()
+
+ for R in keys(I_R)
+ H_R[R] = sparse(I_R[R], J_R[R], H_V_R[R], norbits, norbits)
+ S_R[R] = sparse(I_R[R], J_R[R], S_V_R[R], norbits, norbits)
+ end
+ println("Time for converting to the format of CSC: ", time() - begin_time, "s")
+
+ save(joinpath(parsed_args["input_dir"], "sparse_matrix.jld"), "H_R", H_R, "S_R", S_R)
+ save(joinpath(parsed_args["input_dir"], "sparse_matrix.jld"), "H_R", H_R, "S_R", S_R)
215
+ end
216
+
217
+ if calc_job == "band"
218
+ which_k = config["which_k"] # which k point to calculate, start counting from 1, 0 for all k points
219
+ fermi_level = config["fermi_level"]
220
+ max_iter = config["max_iter"]
221
+ num_band = config["num_band"]
222
+ k_data = config["k_data"]
223
+
224
+ @info "calculate bands"
225
+ num_ks = k_data2num_ks.(k_data)
226
+ kpaths = k_data2kpath.(k_data)
227
+
228
+ egvals = zeros(Float64, num_band, sum(num_ks)[1])
229
+
230
+ begin_time = time()
231
+ idx_k = 1
232
+ for i = 1:size(kpaths, 1)
233
+ kpath = kpaths[i]
234
+ pnkpts = num_ks[i]
235
+ kxs = LinRange(kpath[1], kpath[4], pnkpts)
236
+ kys = LinRange(kpath[2], kpath[5], pnkpts)
237
+ kzs = LinRange(kpath[3], kpath[6], pnkpts)
238
+ for (kx, ky, kz) in zip(kxs, kys, kzs)
239
+ if which_k == 0 || which_k == idx_k
240
+ H_k = spzeros(default_dtype, norbits, norbits)
241
+ S_k = spzeros(default_dtype, norbits, norbits)
242
+ for R in keys(H_R)
243
+ H_k += H_R[R] * exp(im*2π*([kx, ky, kz]⋅R))
244
+ S_k += S_R[R] * exp(im*2π*([kx, ky, kz]⋅R))
245
+ end
246
+ S_k = (S_k + S_k') / 2
247
+ H_k = (H_k + H_k') / 2
248
+ if ill_project
249
+ lm, ps = construct_linear_map(Hermitian(H_k) - (fermi_level) * Hermitian(S_k), Hermitian(S_k))
250
+ println("Time for No.$idx_k matrix factorization: ", time() - begin_time, "s")
251
+ egval_sub_inv, egvec_sub = eigs(lm, nev=num_band, which=:LM, ritzvec=true, maxiter=max_iter)
252
+ set_phase!(ps, Pardiso.RELEASE_ALL)
253
+ pardiso(ps)
254
+ egval_sub = real(1 ./ egval_sub_inv) .+ (fermi_level)
255
+
256
+ # orthogonalize the eigenvectors
257
+ egvec_sub_qr = qr(egvec_sub)
258
+ egvec_sub = convert(Matrix{default_dtype}, egvec_sub_qr.Q)
259
+
260
+ S_k_sub = egvec_sub' * S_k * egvec_sub
261
+ (egval_S, egvec_S) = eigen(Hermitian(S_k_sub))
262
+ # egvec_S: shape (num_basis, num_bands)
263
+ project_index = abs.(egval_S) .> ill_threshold
264
+ if sum(project_index) != length(project_index)
265
+ H_k_sub = egvec_sub' * H_k * egvec_sub
266
+ egvec_S = egvec_S[:, project_index]
267
+ @warn "ill-conditioned eigenvalues detected, projected out $(length(project_index) - sum(project_index)) eigenvalues"
268
+ H_k_sub = egvec_S' * H_k_sub * egvec_S
269
+ S_k_sub = egvec_S' * S_k_sub * egvec_S
270
+ (egval, egvec) = eigen(Hermitian(H_k_sub), Hermitian(S_k_sub))
271
+ egval = vcat(egval, fill(1e4, length(project_index) - sum(project_index)))
272
+ egvec = egvec_S * egvec
273
+ egvec = egvec_sub * egvec
274
+ else
275
+ egval = egval_sub
276
+ end
277
+ else
278
+ lm, ps = construct_linear_map(Hermitian(H_k) - (fermi_level) * Hermitian(S_k), Hermitian(S_k))
279
+ println("Time for No.$idx_k matrix factorization: ", time() - begin_time, "s")
280
+ egval_inv, egvec = eigs(lm, nev=num_band, which=:LM, ritzvec=false, maxiter=max_iter)
281
+ set_phase!(ps, Pardiso.RELEASE_ALL)
282
+ pardiso(ps)
283
+ egval = real(1 ./ egval_inv) .+ (fermi_level)
284
+ # egval = real(eigs(H_k, S_k, nev=num_band, sigma=(fermi_level + lowest_band), which=:LR, ritzvec=false, maxiter=max_iter)[1])
285
+ end
286
+ egvals[:, idx_k] = egval
287
+ if which_k == 0
288
+ # println(egval .- fermi_level)
289
+ else
290
+ open(joinpath(parsed_args["output_dir"], "kpoint.dat"), "w") do f
291
+ writedlm(f, [kx, ky, kz])
292
+ end
293
+ open(joinpath(parsed_args["output_dir"], "egval.dat"), "w") do f
294
+ writedlm(f, egval)
295
+ end
296
+ end
297
+ egvals[:, idx_k] = egval
298
+ println("Time for solving No.$idx_k eigenvalues at k = ", [kx, ky, kz], ": ", time() - begin_time, "s")
299
+ end
300
+ idx_k += 1
301
+ end
302
+ end
303
+
304
+ # output in openmx band format
305
+ f = open(joinpath(parsed_args["output_dir"], "openmx.Band"),"w")
306
+ println(f, num_band, " ", 0, " ", ev2Hartree * fermi_level)
307
+ openmx_rlat = reshape((rlat .* Bohr2Ang), 1, :)
308
+ println(f, std_out_array(openmx_rlat))
309
+ println(f, length(k_data))
310
+ for line in k_data
311
+ println(f,line)
312
+ end
313
+ idx_k = 1
314
+ for i = 1:size(kpaths, 1)
315
+ pnkpts = num_ks[i]
316
+ kstart = kpaths[i][1:3]
317
+ kend = kpaths[i][4:6]
318
+ k_list = zeros(Float64,pnkpts,3)
319
+ for alpha = 1:3
320
+ k_list[:,alpha] = genlist([kstart[alpha],kend[alpha],pnkpts])
321
+ end
322
+ for j = 1:pnkpts
323
+ kvec = k_list[j,:]
324
+ println(f, num_band, " ", std_out_array(kvec))
325
+ println(f, std_out_array(ev2Hartree * egvals[:, idx_k]))
326
+ idx_k += 1
327
+ end
328
+ end
329
+ close(f)
330
+ elseif calc_job == "dos"
331
+ fermi_level = config["fermi_level"]
332
+ max_iter = config["max_iter"]
333
+ num_band = config["num_band"]
334
+ nkmesh = convert(Array{Int64,1}, config["kmesh"])
335
+ ks = constructmeshkpts(nkmesh)
336
+ nks = size(ks, 2)
337
+
338
+ egvals = zeros(Float64, num_band, nks)
339
+ begin_time = time()
340
+ for idx_k in 1:nks
341
+ kx, ky, kz = ks[:, idx_k]
342
+
343
+ H_k = spzeros(default_dtype, norbits, norbits)
344
+ S_k = spzeros(default_dtype, norbits, norbits)
345
+ for R in keys(H_R)
346
+ H_k += H_R[R] * exp(im*2π*([kx, ky, kz]⋅R))
347
+ S_k += S_R[R] * exp(im*2π*([kx, ky, kz]⋅R))
348
+ end
349
+ S_k = (S_k + S_k') / 2
350
+ H_k = (H_k + H_k') / 2
351
+ if ill_project
352
+ lm, ps = construct_linear_map(Hermitian(H_k) - (fermi_level) * Hermitian(S_k), Hermitian(S_k))
353
+ println("Time for No.$idx_k matrix factorization: ", time() - begin_time, "s")
354
+ egval_sub_inv, egvec_sub = eigs(lm, nev=num_band, which=:LM, ritzvec=true, maxiter=max_iter)
355
+ set_phase!(ps, Pardiso.RELEASE_ALL)
356
+ pardiso(ps)
357
+ egval_sub = real(1 ./ egval_sub_inv) .+ (fermi_level)
358
+
359
+ # orthogonalize the eigenvectors
360
+ egvec_sub_qr = qr(egvec_sub)
361
+ egvec_sub = convert(Matrix{default_dtype}, egvec_sub_qr.Q)
362
+
363
+ S_k_sub = egvec_sub' * S_k * egvec_sub
364
+ (egval_S, egvec_S) = eigen(Hermitian(S_k_sub))
365
+ # egvec_S: shape (num_basis, num_bands)
366
+ project_index = abs.(egval_S) .> ill_threshold
367
+ if sum(project_index) != length(project_index)
368
+ H_k_sub = egvec_sub' * H_k * egvec_sub
369
+ egvec_S = egvec_S[:, project_index]
370
+ @warn "ill-conditioned eigenvalues detected, projected out $(length(project_index) - sum(project_index)) eigenvalues"
371
+ H_k_sub = egvec_S' * H_k_sub * egvec_S
372
+ S_k_sub = egvec_S' * S_k_sub * egvec_S
373
+ (egval, egvec) = eigen(Hermitian(H_k_sub), Hermitian(S_k_sub))
374
+ egval = vcat(egval, fill(1e4, length(project_index) - sum(project_index)))
375
+ egvec = egvec_S * egvec
376
+ egvec = egvec_sub * egvec
377
+ else
378
+ egval = egval_sub
379
+ end
380
+ else
381
+ lm, ps = construct_linear_map(Hermitian(H_k) - (fermi_level) * Hermitian(S_k), Hermitian(S_k))
382
+ println("Time for No.$idx_k matrix factorization: ", time() - begin_time, "s")
383
+ egval_inv, egvec = eigs(lm, nev=num_band, which=:LM, ritzvec=false, maxiter=max_iter)
384
+ set_phase!(ps, Pardiso.RELEASE_ALL)
385
+ pardiso(ps)
386
+ egval = real(1 ./ egval_inv) .+ (fermi_level)
387
+ # egval = real(eigs(H_k, S_k, nev=num_band, sigma=(fermi_level + lowest_band), which=:LR, ritzvec=false, maxiter=max_iter)[1])
388
+ end
389
+ egvals[:, idx_k] = egval
390
+ println("Time for solving No.$idx_k eigenvalues at k = ", [kx, ky, kz], ": ", time() - begin_time, "s")
391
+ end
392
+
393
+ open(joinpath(parsed_args["output_dir"], "egvals.dat"), "w") do f
394
+ writedlm(f, egvals)
395
+ end
396
+
397
+ ϵ = config["epsilon"]
398
+ ωs = genlist(config["omegas"])
399
+ nωs = length(ωs)
400
+ dos = zeros(nωs)
401
+ factor = 1/((2π)^3*ϵ*√π)
402
+ for idx_k in 1:nks, idx_band in 1:num_band, (idx_ω, ω) in enumerate(ωs)
403
+ dos[idx_ω] += exp(-(egvals[idx_band, idx_k] - ω - fermi_level) ^ 2 / ϵ ^ 2) * factor
404
+ end
405
+ open(joinpath(parsed_args["output_dir"], "dos.dat"), "w") do f
406
+ writedlm(f, [ωs dos])
407
+ end
408
+ end
409
+ end
410
+
411
+
412
+ main()
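The core of both the `band` and `dos` branches above is the Bloch sum H(k) = Σ_R H_R · exp(2πi k·R) followed by Hermitian symmetrization. A minimal self-contained sketch of that step (plain NumPy with dense toy blocks, not the script's sparse `H_R` dictionaries; `bloch_sum` is a hypothetical helper name):

```python
import numpy as np

def bloch_sum(M_R, k):
    """Assemble M(k) = sum_R M_R * exp(2*pi*i k.R).

    M_R: dict mapping a lattice vector R (3-tuple of ints) to an
    (norb, norb) array of real-space matrix elements.
    k:   fractional k-point coordinates, shape (3,).
    """
    k = np.asarray(k, dtype=float)
    norb = next(iter(M_R.values())).shape[0]
    M_k = np.zeros((norb, norb), dtype=complex)
    for R, block in M_R.items():
        M_k += block * np.exp(2j * np.pi * np.dot(k, R))
    # symmetrize, mirroring the script's (M_k + M_k') / 2
    return (M_k + M_k.conj().T) / 2
```

If the real-space blocks satisfy M(-R) = M(R)†, the symmetrization is a no-op up to numerical noise; otherwise it enforces the Hermiticity the eigensolver expects.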
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/kernel.py ADDED
@@ -0,0 +1,844 @@
+ import json
+ import os
+ from inspect import signature
+ import time
+ import csv
+ import sys
+ import shutil
+ import random
+ import warnings
+ from math import sqrt
+ from itertools import islice
+ from configparser import ConfigParser
+
+ import torch
+ import torch.optim as optim
+ from torch import package
+ from torch.nn import MSELoss
+ from torch.optim.lr_scheduler import MultiStepLR, ReduceLROnPlateau, CyclicLR
+ from torch.utils.data import SubsetRandomSampler, DataLoader
+ from torch.nn.utils import clip_grad_norm_
+ from torch.utils.tensorboard import SummaryWriter
+ from torch_scatter import scatter_add
+ import numpy as np
+ from psutil import cpu_count
+
+ from .data import HData
+ from .graph import Collater
+ from .utils import Logger, save_model, LossRecord, MaskMSELoss, Transform
+
+
+ class DeepHKernel:
+     def __init__(self, config: ConfigParser):
+         self.config = config
+
+         # basic config
+         if config.getboolean('basic', 'save_to_time_folder'):
+             config.set('basic', 'save_dir',
+                        os.path.join(config.get('basic', 'save_dir'),
+                                     str(time.strftime('%Y-%m-%d_%H-%M-%S', time.localtime(time.time())))))
+             assert not os.path.exists(config.get('basic', 'save_dir'))
+         os.makedirs(config.get('basic', 'save_dir'), exist_ok=True)
+
+         sys.stdout = Logger(os.path.join(config.get('basic', 'save_dir'), "result.txt"))
+         sys.stderr = Logger(os.path.join(config.get('basic', 'save_dir'), "stderr.txt"))
+         self.if_tensorboard = config.getboolean('basic', 'tb_writer')
+         if self.if_tensorboard:
+             self.tb_writer = SummaryWriter(os.path.join(config.get('basic', 'save_dir'), "tensorboard"))
+         src_dir = os.path.join(config.get('basic', 'save_dir'), "src")
+         os.makedirs(src_dir, exist_ok=True)
+         try:
+             shutil.copytree(os.path.dirname(__file__), os.path.join(src_dir, 'deeph'))
+         except:
+             warnings.warn("Unable to copy scripts")
+         if not config.getboolean('basic', 'disable_cuda'):
+             self.device = torch.device(config.get('basic', 'device') if torch.cuda.is_available() else 'cpu')
+         else:
+             self.device = torch.device('cpu')
+         config.set('basic', 'device', str(self.device))
+         if config.get('hyperparameter', 'dtype') == 'float32':
+             default_dtype_torch = torch.float32
+         elif config.get('hyperparameter', 'dtype') == 'float16':
+             default_dtype_torch = torch.float16
+         elif config.get('hyperparameter', 'dtype') == 'float64':
+             default_dtype_torch = torch.float64
+         else:
+             raise ValueError('Unknown dtype: {}'.format(config.get('hyperparameter', 'dtype')))
+         np.seterr(all='raise')
+         np.seterr(under='warn')
+         np.set_printoptions(precision=8, linewidth=160)
+         torch.set_default_dtype(default_dtype_torch)
+         torch.set_printoptions(precision=8, linewidth=160, threshold=np.inf)
+         np.random.seed(config.getint('basic', 'seed'))
+         torch.manual_seed(config.getint('basic', 'seed'))
+         torch.cuda.manual_seed_all(config.getint('basic', 'seed'))
+         random.seed(config.getint('basic', 'seed'))
+         torch.backends.cudnn.benchmark = False
+         torch.backends.cudnn.deterministic = True
+         torch.cuda.empty_cache()
+
+         if config.getint('basic', 'num_threads', fallback=-1) == -1:
+             if torch.cuda.device_count() == 0:
+                 torch.set_num_threads(cpu_count(logical=False))
+             else:
+                 torch.set_num_threads(cpu_count(logical=False) // torch.cuda.device_count())
+         else:
+             torch.set_num_threads(config.getint('basic', 'num_threads'))
+
+         print('====== CONFIG ======')
+         for section_k, section_v in islice(config.items(), 1, None):
+             print(f'[{section_k}]')
+             for k, v in section_v.items():
+                 print(f'{k}={v}')
+             print('')
+         config.write(open(os.path.join(config.get('basic', 'save_dir'), 'config.ini'), "w"))
+
+         self.if_lcmp = self.config.getboolean('network', 'if_lcmp', fallback=True)
+         self.if_lcmp_graph = self.config.getboolean('graph', 'if_lcmp_graph', fallback=True)
+         self.new_sp = self.config.getboolean('graph', 'new_sp', fallback=False)
+         self.separate_onsite = self.config.getboolean('graph', 'separate_onsite', fallback=False)
+         if self.if_lcmp == True:
+             assert self.if_lcmp_graph == True
+         self.target = self.config.get('basic', 'target')
+         if self.target == 'O_ij':
+             self.O_component = config['basic']['O_component']
+         if self.target != 'E_ij' and self.target != 'E_i':
+             self.orbital = json.loads(config.get('basic', 'orbital'))
+             self.num_orbital = len(self.orbital)
+         else:
+             self.energy_component = config['basic']['energy_component']
+         # early_stopping
+         self.early_stopping_loss_epoch = json.loads(self.config.get('train', 'early_stopping_loss_epoch'))
+
+     def build_model(self, model_pack_dir: str = None, old_version=None):
+         if model_pack_dir is not None:
+             assert old_version is not None
+             if old_version is True:
+                 print(f'import HGNN from {model_pack_dir}')
+                 sys.path.append(model_pack_dir)
+                 from src.deeph import HGNN
+             else:
+                 imp = package.PackageImporter(os.path.join(model_pack_dir, 'best_model.pt'))
+                 checkpoint = imp.load_pickle('checkpoint', 'model.pkl', map_location=self.device)
+                 self.model = checkpoint['model']
+                 self.model.to(self.device)
+                 self.index_to_Z = checkpoint["index_to_Z"]
+                 self.Z_to_index = checkpoint["Z_to_index"]
+                 self.spinful = checkpoint["spinful"]
+                 print("=> load best checkpoint (epoch {})".format(checkpoint['epoch']))
+                 print(f"=> Atomic types: {self.index_to_Z.tolist()}, "
+                       f"spinful: {self.spinful}, the number of atomic types: {len(self.index_to_Z)}.")
+                 if self.target != 'E_ij':
+                     if self.spinful:
+                         self.out_fea_len = self.num_orbital * 8
+                     else:
+                         self.out_fea_len = self.num_orbital
+                 else:
+                     if self.energy_component == 'both':
+                         self.out_fea_len = 2
+                     elif self.energy_component in ['xc', 'delta_ee', 'summation']:
+                         self.out_fea_len = 1
+                     else:
+                         raise ValueError('Unknown energy_component: {}'.format(self.energy_component))
+                 return checkpoint
+         else:
+             from .model import HGNN
+
+         if self.spinful:
+             if self.target == 'phiVdphi':
+                 raise NotImplementedError("Not yet have support for phiVdphi")
+             else:
+                 self.out_fea_len = self.num_orbital * 8
+         else:
+             if self.target == 'phiVdphi':
+                 self.out_fea_len = self.num_orbital * 3
+             else:
+                 self.out_fea_len = self.num_orbital
+
+         print(f'Output features length of single edge: {self.out_fea_len}')
+         model_kwargs = dict(
+             n_elements=self.num_species,
+             num_species=self.num_species,
+             in_atom_fea_len=self.config.getint('network', 'atom_fea_len'),
+             in_vfeats=self.config.getint('network', 'atom_fea_len'),
+             in_edge_fea_len=self.config.getint('network', 'edge_fea_len'),
+             in_efeats=self.config.getint('network', 'edge_fea_len'),
+             out_edge_fea_len=self.out_fea_len,
+             out_efeats=self.out_fea_len,
+             num_orbital=self.out_fea_len,
+             distance_expansion=self.config.get('network', 'distance_expansion'),
+             gauss_stop=self.config.getfloat('network', 'gauss_stop'),
+             cutoff=self.config.getfloat('network', 'gauss_stop'),
+             if_exp=self.config.getboolean('network', 'if_exp'),
+             if_MultipleLinear=self.config.getboolean('network', 'if_MultipleLinear'),
+             if_edge_update=self.config.getboolean('network', 'if_edge_update'),
+             if_lcmp=self.if_lcmp,
+             normalization=self.config.get('network', 'normalization'),
+             atom_update_net=self.config.get('network', 'atom_update_net', fallback='CGConv'),
+             separate_onsite=self.separate_onsite,
+             num_l=self.config.getint('network', 'num_l'),
+             trainable_gaussians=self.config.getboolean('network', 'trainable_gaussians', fallback=False),
+             type_affine=self.config.getboolean('network', 'type_affine', fallback=False),
+             if_fc_out=False,
+         )
+         parameter_list = list(signature(HGNN.__init__).parameters.keys())
+         current_parameter_list = list(model_kwargs.keys())
+         for k in current_parameter_list:
+             if k not in parameter_list:
+                 model_kwargs.pop(k)
+         if 'num_elements' in parameter_list:
+             model_kwargs['num_elements'] = self.config.getint('basic', 'max_element') + 1
+         self.model = HGNN(
+             **model_kwargs
+         )
+
+         model_parameters = filter(lambda p: p.requires_grad, self.model.parameters())
+         params = sum([np.prod(p.size()) for p in model_parameters])
+         print("The model you built has: %d parameters" % params)
+         self.model.to(self.device)
+         self.load_pretrained()
+
+     def set_train(self):
+         self.criterion_name = self.config.get('hyperparameter', 'criterion', fallback='MaskMSELoss')
+         if self.target == "E_i":
+             self.criterion = MSELoss()
+         elif self.target == "E_ij":
+             self.criterion = MSELoss()
+             self.retain_edge_fea = self.config.getboolean('hyperparameter', 'retain_edge_fea')
+             self.lambda_Eij = self.config.getfloat('hyperparameter', 'lambda_Eij')
+             self.lambda_Ei = self.config.getfloat('hyperparameter', 'lambda_Ei')
+             self.lambda_Etot = self.config.getfloat('hyperparameter', 'lambda_Etot')
+             if self.retain_edge_fea is False:
+                 assert self.lambda_Eij == 0.0
+         else:
+             if self.criterion_name == 'MaskMSELoss':
+                 self.criterion = MaskMSELoss()
+             else:
+                 raise ValueError(f'Unknown criterion: {self.criterion_name}')
+
+         learning_rate = self.config.getfloat('hyperparameter', 'learning_rate')
+         momentum = self.config.getfloat('hyperparameter', 'momentum')
+         weight_decay = self.config.getfloat('hyperparameter', 'weight_decay')
+
+         model_parameters = filter(lambda p: p.requires_grad, self.model.parameters())
+         if self.config.get('hyperparameter', 'optimizer') == 'sgd':
+             self.optimizer = optim.SGD(model_parameters, lr=learning_rate, weight_decay=weight_decay)
+         elif self.config.get('hyperparameter', 'optimizer') == 'sgdm':
+             self.optimizer = optim.SGD(model_parameters, lr=learning_rate, momentum=momentum, weight_decay=weight_decay)
+         elif self.config.get('hyperparameter', 'optimizer') == 'adam':
+             self.optimizer = optim.Adam(model_parameters, lr=learning_rate, betas=(0.9, 0.999))
+         elif self.config.get('hyperparameter', 'optimizer') == 'adamW':
+             self.optimizer = optim.AdamW(model_parameters, lr=learning_rate, betas=(0.9, 0.999))
+         elif self.config.get('hyperparameter', 'optimizer') == 'adagrad':
+             self.optimizer = optim.Adagrad(model_parameters, lr=learning_rate)
+         elif self.config.get('hyperparameter', 'optimizer') == 'RMSprop':
+             self.optimizer = optim.RMSprop(model_parameters, lr=learning_rate)
+         elif self.config.get('hyperparameter', 'optimizer') == 'lbfgs':
+             self.optimizer = optim.LBFGS(model_parameters, lr=0.1)
+         else:
+             raise ValueError(f'Unknown optimizer: {self.optimizer}')
+
+         if self.config.get('hyperparameter', 'lr_scheduler') == '':
+             pass
+         elif self.config.get('hyperparameter', 'lr_scheduler') == 'MultiStepLR':
+             lr_milestones = json.loads(self.config.get('hyperparameter', 'lr_milestones'))
+             self.scheduler = MultiStepLR(self.optimizer, milestones=lr_milestones, gamma=0.2)
+         elif self.config.get('hyperparameter', 'lr_scheduler') == 'ReduceLROnPlateau':
+             self.scheduler = ReduceLROnPlateau(self.optimizer, mode='min', factor=0.2, patience=10,
+                                                verbose=True, threshold=1e-4, threshold_mode='rel', min_lr=0)
+         elif self.config.get('hyperparameter', 'lr_scheduler') == 'CyclicLR':
+             self.scheduler = CyclicLR(self.optimizer, base_lr=learning_rate * 0.1, max_lr=learning_rate,
+                                       mode='triangular', step_size_up=50, step_size_down=50, cycle_momentum=False)
+         else:
+             raise ValueError('Unknown lr_scheduler: {}'.format(self.config.getfloat('hyperparameter', 'lr_scheduler')))
+         self.load_resume()
+
+     def load_pretrained(self):
+         pretrained = self.config.get('train', 'pretrained')
+         if pretrained:
+             if os.path.isfile(pretrained):
+                 checkpoint = torch.load(pretrained, map_location=self.device)
+                 pretrained_dict = checkpoint['state_dict']
+                 model_dict = self.model.state_dict()
+
+                 transfer_dict = {}
+                 for k, v in pretrained_dict.items():
+                     if v.shape == model_dict[k].shape:
+                         transfer_dict[k] = v
+                         print('Use pretrained parameters:', k)
+
+                 model_dict.update(transfer_dict)
+                 self.model.load_state_dict(model_dict)
+                 print(f'=> loaded pretrained model at "{pretrained}" (epoch {checkpoint["epoch"]})')
+             else:
+                 print(f'=> no checkpoint found at "{pretrained}"')
+
+     def load_resume(self):
+         resume = self.config.get('train', 'resume')
+         if resume:
+             if os.path.isfile(resume):
+                 checkpoint = torch.load(resume, map_location=self.device)
+                 self.model.load_state_dict(checkpoint['state_dict'])
+                 self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
+                 print(f'=> loaded model at "{resume}" (epoch {checkpoint["epoch"]})')
+             else:
+                 print(f'=> no checkpoint found at "{resume}"')
+
+     def get_dataset(self, only_get_graph=False):
+         dataset = HData(
+             raw_data_dir=self.config.get('basic', 'raw_dir'),
+             graph_dir=self.config.get('basic', 'graph_dir'),
+             interface=self.config.get('basic', 'interface'),
+             target=self.target,
+             dataset_name=self.config.get('basic', 'dataset_name'),
+             multiprocessing=self.config.getint('basic', 'multiprocessing', fallback=0),
+             radius=self.config.getfloat('graph', 'radius'),
+             max_num_nbr=self.config.getint('graph', 'max_num_nbr'),
+             num_l=self.config.getint('network', 'num_l'),
+             max_element=self.config.getint('basic', 'max_element'),
+             create_from_DFT=self.config.getboolean('graph', 'create_from_DFT', fallback=True),
+             if_lcmp_graph=self.if_lcmp_graph,
+             separate_onsite=self.separate_onsite,
+             new_sp=self.new_sp,
+             default_dtype_torch=torch.get_default_dtype(),
+         )
+         if only_get_graph:
+             return None, None, None, None
+         self.spinful = dataset.info["spinful"]
+         self.index_to_Z = dataset.info["index_to_Z"]
+         self.Z_to_index = dataset.info["Z_to_index"]
+         self.num_species = len(dataset.info["index_to_Z"])
+         if self.target != 'E_ij' and self.target != 'E_i':
+             dataset = self.make_mask(dataset)
+
+         dataset_size = len(dataset)
+         train_size = int(self.config.getfloat('train', 'train_ratio') * dataset_size)
+         val_size = int(self.config.getfloat('train', 'val_ratio') * dataset_size)
+         test_size = int(self.config.getfloat('train', 'test_ratio') * dataset_size)
+         assert train_size + val_size + test_size <= dataset_size
+
+         indices = list(range(dataset_size))
+         np.random.shuffle(indices)
+         print(f'number of train set: {len(indices[:train_size])}')
+         print(f'number of val set: {len(indices[train_size:train_size + val_size])}')
+         print(f'number of test set: {len(indices[train_size + val_size:train_size + val_size + test_size])}')
+         train_sampler = SubsetRandomSampler(indices[:train_size])
+         val_sampler = SubsetRandomSampler(indices[train_size:train_size + val_size])
+         test_sampler = SubsetRandomSampler(indices[train_size + val_size:train_size + val_size + test_size])
+         train_loader = DataLoader(dataset, batch_size=self.config.getint('hyperparameter', 'batch_size'),
+                                   shuffle=False, sampler=train_sampler,
+                                   collate_fn=Collater(self.if_lcmp))
+         val_loader = DataLoader(dataset, batch_size=self.config.getint('hyperparameter', 'batch_size'),
+                                 shuffle=False, sampler=val_sampler,
+                                 collate_fn=Collater(self.if_lcmp))
+         test_loader = DataLoader(dataset, batch_size=self.config.getint('hyperparameter', 'batch_size'),
+                                  shuffle=False, sampler=test_sampler,
+                                  collate_fn=Collater(self.if_lcmp))
+
+         if self.config.getboolean('basic', 'statistics'):
+             sample_label = torch.cat([dataset[i].label for i in range(len(dataset))])
+             sample_mask = torch.cat([dataset[i].mask for i in range(len(dataset))])
+             mean_value = abs(sample_label).sum(dim=0) / sample_mask.sum(dim=0)
+             import matplotlib.pyplot as plt
+             len_matrix = int(sqrt(self.out_fea_len))
+             if len_matrix ** 2 != self.out_fea_len:
+                 raise ValueError
+             mean_value = mean_value.reshape(len_matrix, len_matrix)
+             im = plt.imshow(mean_value, cmap='Blues')
+             plt.colorbar(im)
+             plt.xticks(range(len_matrix), range(len_matrix))
+             plt.yticks(range(len_matrix), range(len_matrix))
+             plt.xlabel(r'Orbital $\beta$')
+             plt.ylabel(r'Orbital $\alpha$')
+             plt.title(r'Mean of abs($H^\prime_{i\alpha, j\beta}$)')
+             plt.tight_layout()
+             plt.savefig(os.path.join(self.config.get('basic', 'save_dir'), 'mean.png'), dpi=800)
+             np.savetxt(os.path.join(self.config.get('basic', 'save_dir'), 'mean.dat'), mean_value.numpy())
+
+             print(f"The statistical results are saved to {os.path.join(self.config.get('basic', 'save_dir'), 'mean.dat')}")
+
+         normalizer = self.config.getboolean('basic', 'normalizer')
+         boxcox = self.config.getboolean('basic', 'boxcox')
+         if normalizer == False and boxcox == False:
+             transform = Transform()
+         else:
+             sample_label = torch.cat([dataset[i].label for i in range(len(dataset))])
+             sample_mask = torch.cat([dataset[i].mask for i in range(len(dataset))])
+             transform = Transform(sample_label, mask=sample_mask, normalizer=normalizer, boxcox=boxcox)
+             print(transform.state_dict())
+
+         return train_loader, val_loader, test_loader, transform
+
+     def make_mask(self, dataset):
+         dataset_mask = []
+         for data in dataset:
+             if self.target == 'hamiltonian' or self.target == 'phiVdphi' or self.target == 'density_matrix':
+                 Oij_value = data.term_real
+                 if data.term_real is not None:
+                     if_only_rc = False
+                 else:
+                     if_only_rc = True
+             elif self.target == 'O_ij':
+                 if self.O_component == 'H_minimum':
+                     Oij_value = data.rvdee + data.rvxc
+                 elif self.O_component == 'H_minimum_withNA':
+                     Oij_value = data.rvna + data.rvdee + data.rvxc
+                 elif self.O_component == 'H':
+                     Oij_value = data.rh
+                 elif self.O_component == 'Rho':
+                     Oij_value = data.rdm
+                 else:
+                     raise ValueError(f'Unknown O_component: {self.O_component}')
+                 if_only_rc = False
+             else:
+                 raise ValueError(f'Unknown target: {self.target}')
+             if if_only_rc == False:
+                 if not torch.all(data.term_mask):
+                     raise NotImplementedError("Not yet have support for graph radius including hopping without calculation")
+
+             if self.spinful:
+                 if self.target == 'phiVdphi':
+                     raise NotImplementedError("Not yet have support for phiVdphi")
+                 else:
+                     out_fea_len = self.num_orbital * 8
+             else:
+                 if self.target == 'phiVdphi':
+                     out_fea_len = self.num_orbital * 3
+                 else:
+                     out_fea_len = self.num_orbital
+             mask = torch.zeros(data.edge_attr.shape[0], out_fea_len, dtype=torch.int8)
+             label = torch.zeros(data.edge_attr.shape[0], out_fea_len, dtype=torch.get_default_dtype())
+
+             atomic_number_edge_i = self.index_to_Z[data.x[data.edge_index[0]]]
+             atomic_number_edge_j = self.index_to_Z[data.x[data.edge_index[1]]]
+
+             for index_out, orbital_dict in enumerate(self.orbital):
+                 for N_M_str, a_b in orbital_dict.items():
+                     # N_M, a_b means: H_{ia, jb} when the atomic number of atom i is N and the atomic number of atom j is M
+                     condition_atomic_number_i, condition_atomic_number_j = map(lambda x: int(x), N_M_str.split())
+                     condition_orbital_i, condition_orbital_j = a_b
+
+                     if self.spinful:
+                         if self.target == 'phiVdphi':
+                             raise NotImplementedError("Not yet have support for phiVdphi")
+                         else:
+                             mask[:, 8 * index_out:8 * (index_out + 1)] = torch.where(
+                                 (atomic_number_edge_i == condition_atomic_number_i)
+                                 & (atomic_number_edge_j == condition_atomic_number_j),
+                                 1,
+                                 0
+                             )[:, None].repeat(1, 8)
+                     else:
+                         if self.target == 'phiVdphi':
+                             mask[:, 3 * index_out:3 * (index_out + 1)] += torch.where(
+                                 (atomic_number_edge_i == condition_atomic_number_i)
+                                 & (atomic_number_edge_j == condition_atomic_number_j),
+                                 1,
+                                 0
+                             )[:, None].repeat(1, 3)
+                         else:
+                             mask[:, index_out] += torch.where(
+                                 (atomic_number_edge_i == condition_atomic_number_i)
+                                 & (atomic_number_edge_j == condition_atomic_number_j),
+                                 1,
+                                 0
+                             )
+
+                     if if_only_rc == False:
+                         if self.spinful:
+                             if self.target == 'phiVdphi':
+                                 raise NotImplementedError
+                             else:
+                                 label[:, 8 * index_out:8 * (index_out + 1)] = torch.where(
+                                     (atomic_number_edge_i == condition_atomic_number_i)
+                                     & (atomic_number_edge_j == condition_atomic_number_j),
+                                     Oij_value[:, condition_orbital_i, condition_orbital_j].t(),
+                                     torch.zeros(8, data.edge_attr.shape[0], dtype=torch.get_default_dtype())
+                                 ).t()
+                         else:
+                             if self.target == 'phiVdphi':
+                                 label[:, 3 * index_out:3 * (index_out + 1)] = torch.where(
+                                     (atomic_number_edge_i == condition_atomic_number_i)
+                                     & (atomic_number_edge_j == condition_atomic_number_j),
+                                     Oij_value[:, condition_orbital_i, condition_orbital_j].t(),
+                                     torch.zeros(3, data.edge_attr.shape[0], dtype=torch.get_default_dtype())
+                                 ).t()
+                             else:
+                                 label[:, index_out] += torch.where(
+                                     (atomic_number_edge_i == condition_atomic_number_i)
+                                     & (atomic_number_edge_j == condition_atomic_number_j),
+                                     Oij_value[:, condition_orbital_i, condition_orbital_j],
+                                     torch.zeros(data.edge_attr.shape[0], dtype=torch.get_default_dtype())
+                                 )
+             assert len(torch.where((mask != 1) & (mask != 0))[0]) == 0
+             mask = mask.bool()
+             data.mask = mask
+             del data.term_mask
+             if if_only_rc == False:
+                 data.label = label
+                 if self.target == 'hamiltonian' or self.target == 'density_matrix':
+                     del data.term_real
+                 elif self.target == 'O_ij':
+                     del data.rh
+                     del data.rdm
+                     del data.rvdee
+                     del data.rvxc
+                     del data.rvna
+             dataset_mask.append(data)
+ dataset_mask.append(data)
488
+ return dataset_mask
489
+
+    def train(self, train_loader, val_loader, test_loader):
+        begin_time = time.time()
+        self.best_val_loss = 1e10
+        if self.config.getboolean('train', 'revert_then_decay'):
+            lr_step = 0
+
+            revert_decay_epoch = json.loads(self.config.get('train', 'revert_decay_epoch'))
+            revert_decay_gamma = json.loads(self.config.get('train', 'revert_decay_gamma'))
+            assert len(revert_decay_epoch) == len(revert_decay_gamma)
+            lr_step_num = len(revert_decay_epoch)
+
+        try:
+            for epoch in range(self.config.getint('train', 'epochs')):
+                if self.config.getboolean('train', 'switch_sgd') and epoch == self.config.getint('train', 'switch_sgd_epoch'):
+                    model_parameters = filter(lambda p: p.requires_grad, self.model.parameters())
+                    self.optimizer = optim.SGD(model_parameters, lr=self.config.getfloat('train', 'switch_sgd_lr'))
+                    print(f"Switch to sgd (epoch: {epoch})")
+
+                learning_rate = self.optimizer.param_groups[0]['lr']
+                if self.if_tensorboard:
+                    self.tb_writer.add_scalar('Learning rate', learning_rate, global_step=epoch)
+
+                # train
+                train_losses = self.kernel_fn(train_loader, 'TRAIN')
+                if self.if_tensorboard:
+                    self.tb_writer.add_scalars('loss', {'Train loss': train_losses.avg}, global_step=epoch)
+
+                # val
+                with torch.no_grad():
+                    val_losses = self.kernel_fn(val_loader, 'VAL')
+                if val_losses.avg > self.config.getfloat('train', 'revert_threshold') * self.best_val_loss:
+                    print(f'Epoch #{epoch:01d} \t| '
+                          f'Learning rate: {learning_rate:0.2e} \t| '
+                          f'Epoch time: {time.time() - begin_time:.2f} \t| '
+                          f'Train loss: {train_losses.avg:.8f} \t| '
+                          f'Val loss: {val_losses.avg:.8f} \t| '
+                          f'Best val loss: {self.best_val_loss:.8f}.'
+                          )
+                    best_checkpoint = torch.load(os.path.join(self.config.get('basic', 'save_dir'), 'best_state_dict.pkl'))
+                    self.model.load_state_dict(best_checkpoint['state_dict'])
+                    self.optimizer.load_state_dict(best_checkpoint['optimizer_state_dict'])
+                    if self.config.getboolean('train', 'revert_then_decay'):
+                        if lr_step < lr_step_num:
+                            for param_group in self.optimizer.param_groups:
+                                param_group['lr'] = learning_rate * revert_decay_gamma[lr_step]
+                            lr_step += 1
+                    with torch.no_grad():
+                        val_losses = self.kernel_fn(val_loader, 'VAL')
+                    print(f"Revert (threshold: {self.config.getfloat('train', 'revert_threshold')}) to epoch {best_checkpoint['epoch']} \t| Val loss: {val_losses.avg:.8f}")
+                    if self.if_tensorboard:
+                        self.tb_writer.add_scalars('loss', {'Validation loss': val_losses.avg}, global_step=epoch)
+
+                    if self.config.get('hyperparameter', 'lr_scheduler') == 'MultiStepLR':
+                        self.scheduler.step()
+                    elif self.config.get('hyperparameter', 'lr_scheduler') == 'ReduceLROnPlateau':
+                        self.scheduler.step(val_losses.avg)
+                    elif self.config.get('hyperparameter', 'lr_scheduler') == 'CyclicLR':
+                        self.scheduler.step()
+                    continue
+                if self.if_tensorboard:
+                    self.tb_writer.add_scalars('loss', {'Validation loss': val_losses.avg}, global_step=epoch)
+
+                if self.config.getboolean('train', 'revert_then_decay'):
+                    if lr_step < lr_step_num and epoch >= revert_decay_epoch[lr_step]:
+                        for param_group in self.optimizer.param_groups:
+                            param_group['lr'] *= revert_decay_gamma[lr_step]
+                        lr_step += 1
+
+                is_best = val_losses.avg < self.best_val_loss
+                self.best_val_loss = min(val_losses.avg, self.best_val_loss)
+
+                save_complete = False
+                while not save_complete:
+                    try:
+                        save_model({
+                            'epoch': epoch + 1,
+                            'optimizer_state_dict': self.optimizer.state_dict(),
+                            'best_val_loss': self.best_val_loss,
+                            'spinful': self.spinful,
+                            'Z_to_index': self.Z_to_index,
+                            'index_to_Z': self.index_to_Z,
+                        }, {'model': self.model}, {'state_dict': self.model.state_dict()},
+                            path=self.config.get('basic', 'save_dir'), is_best=is_best)
+                        save_complete = True
+                    except KeyboardInterrupt:
+                        print('\nKeyboardInterrupt while saving model to disk')
+
+                if self.config.get('hyperparameter', 'lr_scheduler') == 'MultiStepLR':
+                    self.scheduler.step()
+                elif self.config.get('hyperparameter', 'lr_scheduler') == 'ReduceLROnPlateau':
+                    self.scheduler.step(val_losses.avg)
+                elif self.config.get('hyperparameter', 'lr_scheduler') == 'CyclicLR':
+                    self.scheduler.step()
+
+                print(f'Epoch #{epoch:01d} \t| '
+                      f'Learning rate: {learning_rate:0.2e} \t| '
+                      f'Epoch time: {time.time() - begin_time:.2f} \t| '
+                      f'Train loss: {train_losses.avg:.8f} \t| '
+                      f'Val loss: {val_losses.avg:.8f} \t| '
+                      f'Best val loss: {self.best_val_loss:.8f}.'
+                      )
+
+                if val_losses.avg < self.config.getfloat('train', 'early_stopping_loss'):
+                    print(f"Early stopping because the target accuracy (validation loss < {self.config.getfloat('train', 'early_stopping_loss')}) is achieved at epoch #{epoch:01d}")
+                    break
+                if epoch > self.early_stopping_loss_epoch[1] and val_losses.avg < self.early_stopping_loss_epoch[0]:
+                    print(f"Early stopping because the target accuracy (validation loss < {self.early_stopping_loss_epoch[0]} and epoch > {self.early_stopping_loss_epoch[1]}) is achieved at epoch #{epoch:01d}")
+                    break
+
+                begin_time = time.time()
+        except KeyboardInterrupt:
+            print('\nKeyboardInterrupt')
+
+        print('---------Evaluate Model on Test Set---------------')
+        best_checkpoint = torch.load(os.path.join(self.config.get('basic', 'save_dir'), 'best_state_dict.pkl'))
+        self.model.load_state_dict(best_checkpoint['state_dict'])
+        print("=> load best checkpoint (epoch {})".format(best_checkpoint['epoch']))
+        with torch.no_grad():
+            test_csv_name = 'test_results.csv'
+            train_csv_name = 'train_results.csv'
+            val_csv_name = 'val_results.csv'
+
+            if self.config.getboolean('basic', 'save_csv'):
+                tmp = 'TEST'
+            else:
+                tmp = 'VAL'
+            test_losses = self.kernel_fn(test_loader, tmp, test_csv_name, output_E=True)
+            print(f'Test loss: {test_losses.avg:.8f}.')
+            if self.if_tensorboard:
+                self.tb_writer.add_scalars('loss', {'Test loss': test_losses.avg}, global_step=epoch)
+            test_losses = self.kernel_fn(train_loader, tmp, train_csv_name, output_E=True)
+            print(f'Train loss: {test_losses.avg:.8f}.')
+            test_losses = self.kernel_fn(val_loader, tmp, val_csv_name, output_E=True)
+            print(f'Val loss: {test_losses.avg:.8f}.')
+
+    def predict(self, hamiltonian_dirs):
+        raise NotImplementedError
+
+    def kernel_fn(self, loader, task: str, save_name=None, output_E=False):
+        assert task in ['TRAIN', 'VAL', 'TEST']
+
+        losses = LossRecord()
+        if task == 'TRAIN':
+            self.model.train()
+        else:
+            self.model.eval()
+        if task == 'TEST':
+            assert save_name != None
+            if self.target == "E_i" or self.target == "E_ij":
+                test_targets = []
+                test_preds = []
+                test_ids = []
+                test_atom_ids = []
+                test_atomic_numbers = []
+            else:
+                test_targets = []
+                test_preds = []
+                test_ids = []
+                test_atom_ids = []
+                test_atomic_numbers = []
+                test_edge_infos = []
+
+        if task != 'TRAIN' and (self.out_fea_len != 1):
+            losses_each_out = [LossRecord() for _ in range(self.out_fea_len)]
+        for step, batch_tuple in enumerate(loader):
+            if self.if_lcmp:
+                batch, subgraph = batch_tuple
+                sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index = subgraph
+                output = self.model(
+                    batch.x.to(self.device),
+                    batch.edge_index.to(self.device),
+                    batch.edge_attr.to(self.device),
+                    batch.batch.to(self.device),
+                    sub_atom_idx.to(self.device),
+                    sub_edge_idx.to(self.device),
+                    sub_edge_ang.to(self.device),
+                    sub_index.to(self.device)
+                )
+            else:
+                batch = batch_tuple
+                output = self.model(
+                    batch.x.to(self.device),
+                    batch.edge_index.to(self.device),
+                    batch.edge_attr.to(self.device),
+                    batch.batch.to(self.device)
+                )
+            if self.target == 'E_ij':
+                if self.energy_component == 'E_ij':
+                    label_non_onsite = batch.E_ij.to(self.device)
+                    label_onsite = batch.onsite_E_ij.to(self.device)
+                elif self.energy_component == 'summation':
+                    label_non_onsite = batch.E_delta_ee_ij.to(self.device) + batch.E_xc_ij.to(self.device)
+                    label_onsite = batch.onsite_E_delta_ee_ij.to(self.device) + batch.onsite_E_xc_ij.to(self.device)
+                elif self.energy_component == 'delta_ee':
+                    label_non_onsite = batch.E_delta_ee_ij.to(self.device)
+                    label_onsite = batch.onsite_E_delta_ee_ij.to(self.device)
+                elif self.energy_component == 'xc':
+                    label_non_onsite = batch.E_xc_ij.to(self.device)
+                    label_onsite = batch.onsite_E_xc_ij.to(self.device)
+                elif self.energy_component == 'both':
+                    raise NotImplementedError
+                output_onsite, output_non_onsite = output
+                if self.retain_edge_fea is False:
+                    output_non_onsite = output_non_onsite * 0
+
+            elif self.target == 'E_i':
+                label = batch.E_i.to(self.device)
+                output = output.reshape(label.shape)
+            else:
+                label = batch.label.to(self.device)
+                output = output.reshape(label.shape)
+
+            if self.target == 'E_i':
+                loss = self.criterion(output, label)
+            elif self.target == 'E_ij':
+                loss_Eij = self.criterion(torch.cat([output_onsite, output_non_onsite], dim=0),
+                                          torch.cat([label_onsite, label_non_onsite], dim=0))
+                output_non_onsite_Ei = scatter_add(output_non_onsite, batch.edge_index.to(self.device)[0, :], dim=0)
+                label_non_onsite_Ei = scatter_add(label_non_onsite, batch.edge_index.to(self.device)[0, :], dim=0)
+                output_Ei = output_non_onsite_Ei + output_onsite
+                label_Ei = label_non_onsite_Ei + label_onsite
+                loss_Ei = self.criterion(output_Ei, label_Ei)
+                loss_Etot = self.criterion(scatter_add(output_Ei, batch.batch.to(self.device), dim=0),
+                                           scatter_add(label_Ei, batch.batch.to(self.device), dim=0))
+                loss = loss_Eij * self.lambda_Eij + loss_Ei * self.lambda_Ei + loss_Etot * self.lambda_Etot
+            else:
+                if self.criterion_name == 'MaskMSELoss':
+                    mask = batch.mask.to(self.device)
+                    loss = self.criterion(output, label, mask)
+                else:
+                    raise ValueError(f'Unknown criterion: {self.criterion_name}')
+            if task == 'TRAIN':
+                if self.config.get('hyperparameter', 'optimizer') == 'lbfgs':
+                    def closure():
+                        self.optimizer.zero_grad()
+                        if self.if_lcmp:
+                            output = self.model(
+                                batch.x.to(self.device),
+                                batch.edge_index.to(self.device),
+                                batch.edge_attr.to(self.device),
+                                batch.batch.to(self.device),
+                                sub_atom_idx.to(self.device),
+                                sub_edge_idx.to(self.device),
+                                sub_edge_ang.to(self.device),
+                                sub_index.to(self.device)
+                            )
+                        else:
+                            output = self.model(
+                                batch.x.to(self.device),
+                                batch.edge_index.to(self.device),
+                                batch.edge_attr.to(self.device),
+                                batch.batch.to(self.device)
+                            )
+                        loss = self.criterion(output, label.to(self.device), mask)
+                        loss.backward()
+                        return loss
+
+                    self.optimizer.step(closure)
+                else:
+                    self.optimizer.zero_grad()
+                    loss.backward()
+                    if self.config.getboolean('train', 'clip_grad'):
+                        clip_grad_norm_(self.model.parameters(), self.config.getfloat('train', 'clip_grad_value'))
+                    self.optimizer.step()
+
+            if self.target == "E_i" or self.target == "E_ij":
+                losses.update(loss.item(), batch.num_nodes)
+            else:
+                if self.criterion_name == 'MaskMSELoss':
+                    losses.update(loss.item(), mask.sum())
+            if task != 'TRAIN' and self.out_fea_len != 1:
+                if self.criterion_name == 'MaskMSELoss':
+                    se_each_out = torch.pow(output - label.to(self.device), 2)
+                    for index_out, losses_each_out_for in enumerate(losses_each_out):
+                        count = mask[:, index_out].sum().item()
+                        if count == 0:
+                            losses_each_out_for.update(-1, 1)
+                        else:
+                            losses_each_out_for.update(
+                                torch.masked_select(se_each_out[:, index_out], mask[:, index_out]).mean().item(),
+                                count
+                            )
+            if task == 'TEST':
+                if self.target == "E_ij":
+                    test_targets += torch.squeeze(label_Ei.detach().cpu()).tolist()
+                    test_preds += torch.squeeze(output_Ei.detach().cpu()).tolist()
+                    test_ids += np.array(batch.stru_id)[torch.squeeze(batch.batch).numpy()].tolist()
+                    test_atom_ids += torch.squeeze(
+                        torch.tensor(range(batch.num_nodes)) - torch.tensor(batch.__slices__['x'])[
+                            batch.batch]).tolist()
+                    test_atomic_numbers += torch.squeeze(self.index_to_Z[batch.x]).tolist()
+                elif self.target == "E_i":
+                    test_targets = torch.squeeze(label.detach().cpu()).tolist()
+                    test_preds = torch.squeeze(output.detach().cpu()).tolist()
+                    test_ids = np.array(batch.stru_id)[torch.squeeze(batch.batch).numpy()].tolist()
+                    test_atom_ids += torch.squeeze(torch.tensor(range(batch.num_nodes)) - torch.tensor(batch.__slices__['x'])[batch.batch]).tolist()
+                    test_atomic_numbers += torch.squeeze(self.index_to_Z[batch.x]).tolist()
+                else:
+                    edge_stru_index = torch.squeeze(batch.batch[batch.edge_index[0]]).numpy()
+                    edge_slices = torch.tensor(batch.__slices__['x'])[edge_stru_index].view(-1, 1)
+                    test_preds += torch.squeeze(output.detach().cpu()).tolist()
+                    test_targets += torch.squeeze(label.detach().cpu()).tolist()
+                    test_ids += np.array(batch.stru_id)[edge_stru_index].tolist()
+                    test_atom_ids += torch.squeeze(batch.edge_index.T - edge_slices).tolist()
+                    test_atomic_numbers += torch.squeeze(self.index_to_Z[batch.x[batch.edge_index.T]]).tolist()
+                    test_edge_infos += torch.squeeze(batch.edge_attr[:, :7].detach().cpu()).tolist()
+            if output_E is True:
+                if self.target == 'E_ij':
+                    output_non_onsite_Ei = scatter_add(output_non_onsite, batch.edge_index.to(self.device)[1, :], dim=0)
+                    label_non_onsite_Ei = scatter_add(label_non_onsite, batch.edge_index.to(self.device)[1, :], dim=0)
+                    output_Ei = output_non_onsite_Ei + output_onsite
+                    label_Ei = label_non_onsite_Ei + label_onsite
+                    Etot_error = abs(scatter_add(output_Ei, batch.batch.to(self.device), dim=0)
+                                     - scatter_add(label_Ei, batch.batch.to(self.device), dim=0)).reshape(-1).tolist()
+                    for test_stru_id, test_error in zip(batch.stru_id, Etot_error):
+                        print(f'{test_stru_id}: {test_error * 1000:.2f} meV / unit_cell')
+                elif self.target == 'E_i':
+                    Etot_error = abs(scatter_add(output, batch.batch.to(self.device), dim=0)
+                                     - scatter_add(label, batch.batch.to(self.device), dim=0)).reshape(-1).tolist()
+                    for test_stru_id, test_error in zip(batch.stru_id, Etot_error):
+                        print(f'{test_stru_id}: {test_error * 1000:.2f} meV / unit_cell')
+
+        if task != 'TRAIN' and (self.out_fea_len != 1):
+            print('%s loss each out:' % task)
+            loss_list = list(map(lambda x: f'{x.avg:0.1e}', losses_each_out))
+            print('[' + ', '.join(loss_list) + ']')
+            loss_list = list(map(lambda x: x.avg, losses_each_out))
+            print(f'max orbital: {max(loss_list):0.1e} (0-based index: {np.argmax(loss_list)})')
+        if task == 'TEST':
+            with open(os.path.join(self.config.get('basic', 'save_dir'), save_name), 'w', newline='') as f:
+                writer = csv.writer(f)
+                if self.target == "E_i" or self.target == "E_ij":
+                    writer.writerow(['stru_id', 'atom_id', 'atomic_number'] +
+                                    ['target'] * self.out_fea_len + ['pred'] * self.out_fea_len)
+                    for stru_id, atom_id, atomic_number, target, pred in zip(test_ids, test_atom_ids,
+                                                                             test_atomic_numbers,
+                                                                             test_targets, test_preds):
+                        if self.out_fea_len == 1:
+                            writer.writerow((stru_id, atom_id, atomic_number, target, pred))
+                        else:
+                            writer.writerow((stru_id, atom_id, atomic_number, *target, *pred))
+
+                else:
+                    writer.writerow(['stru_id', 'atom_id', 'atomic_number', 'dist', 'atom1_x', 'atom1_y', 'atom1_z',
+                                     'atom2_x', 'atom2_y', 'atom2_z']
+                                    + ['target'] * self.out_fea_len + ['pred'] * self.out_fea_len)
+                    for stru_id, atom_id, atomic_number, edge_info, target, pred in zip(test_ids, test_atom_ids,
+                                                                                       test_atomic_numbers,
+                                                                                       test_edge_infos, test_targets,
+                                                                                       test_preds):
+                        if self.out_fea_len == 1:
+                            writer.writerow((stru_id, atom_id, atomic_number, *edge_info, target, pred))
+                        else:
+                            writer.writerow((stru_id, atom_id, atomic_number, *edge_info, *target, *pred))
+        return losses
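The `kernel_fn` above routes Hamiltonian-style targets through a `MaskMSELoss` criterion defined elsewhere in the package, together with the boolean `mask` built per orbital pair. As a rough, hypothetical stand-in (not the package's implementation), such a criterion averages the squared error only over entries whose mask bit is set:

```python
# Hypothetical sketch of a masked MSE, independent of torch:
# average squared error over entries where the mask flag is truthy.
def mask_mse(output, label, mask):
    """Mean squared error over masked-in entries (mask: iterable of 0/1 flags)."""
    se = [(o - l) ** 2 for o, l, m in zip(output, label, mask) if m]
    return sum(se) / len(se)

print(mask_mse([1.0, 2.0, 3.0], [1.0, 0.0, 5.0], [0, 1, 1]))  # -> 4.0
```

The first entry is masked out, so only the errors 2.0 and -2.0 contribute, giving a mean of 4.0.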
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/model.py ADDED
@@ -0,0 +1,676 @@
+import os
+from typing import Union, Tuple
+from math import ceil, sqrt
+
+import torch
+from torch import nn
+import torch.nn.functional as F
+from torch_geometric.nn.conv import MessagePassing
+from torch_geometric.nn.norm import LayerNorm, PairNorm, InstanceNorm
+from torch_geometric.typing import PairTensor, Adj, OptTensor, Size
+from torch_geometric.nn.inits import glorot, zeros
+from torch_geometric.utils import softmax
+from torch_geometric.nn.models.dimenet import BesselBasisLayer
+from torch_scatter import scatter_add, scatter
+import numpy as np
+from scipy.special import comb
+
+from .from_se3_transformer import SphericalHarmonics
+from .from_schnetpack import GaussianBasis
+from .from_PyG_future import GraphNorm, DiffGroupNorm
+from .from_HermNet import RBF, cosine_cutoff, ShiftedSoftplus, _eps
+
+
+class ExpBernsteinBasis(nn.Module):
+    def __init__(self, K, gamma, cutoff, trainable=True):
+        super(ExpBernsteinBasis, self).__init__()
+        self.K = K
+        if trainable:
+            self.gamma = nn.Parameter(torch.tensor(gamma))
+        else:
+            self.gamma = torch.tensor(gamma)
+        self.register_buffer('cutoff', torch.tensor(cutoff))
+        self.register_buffer('comb_k', torch.Tensor(comb(K - 1, np.arange(K))))
+
+    def forward(self, distances):
+        f_zero = torch.zeros_like(distances)
+        f_cut = torch.where(distances < self.cutoff, torch.exp(
+            -(distances ** 2) / (self.cutoff ** 2 - distances ** 2)), f_zero)
+        x = torch.exp(-self.gamma * distances)
+        out = []
+        for k in range(self.K):
+            out.append((x ** k) * ((1 - x) ** (self.K - 1 - k)))
+        out = torch.stack(out, dim=-1)
+        out = out * self.comb_k[None, :] * f_cut[:, None]
+        return out
+
+
+def get_spherical_from_cartesian(cartesian, cartesian_x=1, cartesian_y=2, cartesian_z=0):
+    spherical = torch.zeros_like(cartesian[..., 0:2])
+    r_xy = cartesian[..., cartesian_x] ** 2 + cartesian[..., cartesian_y] ** 2
+    spherical[..., 0] = torch.atan2(torch.sqrt(r_xy), cartesian[..., cartesian_z])
+    spherical[..., 1] = torch.atan2(cartesian[..., cartesian_y], cartesian[..., cartesian_x])
+    return spherical
+
+
+class SphericalHarmonicsBasis(nn.Module):
+    def __init__(self, num_l=5):
+        super(SphericalHarmonicsBasis, self).__init__()
+        self.num_l = num_l
+
+    def forward(self, edge_attr):
+        r_vec = edge_attr[:, 1:4] - edge_attr[:, 4:7]
+        r_vec_sp = get_spherical_from_cartesian(r_vec)
+        sph_harm_func = SphericalHarmonics()
+
+        angular_expansion = []
+        for l in range(self.num_l):
+            angular_expansion.append(sph_harm_func.get(l, r_vec_sp[:, 0], r_vec_sp[:, 1]))
+        angular_expansion = torch.cat(angular_expansion, dim=-1)
+
+        return angular_expansion
+
+
+"""
+The class CGConv below is extended from "https://github.com/rusty1s/pytorch_geometric", which has the MIT License below
+
+---------------------------------------------------------------------------
+Copyright (c) 2020 Matthias Fey <matthias.fey@tu-dortmund.de>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+"""
+class CGConv(MessagePassing):
+    def __init__(self, channels: Union[int, Tuple[int, int]], dim: int = 0,
+                 aggr: str = 'add', normalization: str = None,
+                 bias: bool = True, if_exp: bool = False, **kwargs):
+        super(CGConv, self).__init__(aggr=aggr, flow="source_to_target", **kwargs)
+        self.channels = channels
+        self.dim = dim
+        self.normalization = normalization
+        self.if_exp = if_exp
+
+        if isinstance(channels, int):
+            channels = (channels, channels)
+
+        self.lin_f = nn.Linear(sum(channels) + dim, channels[1], bias=bias)
+        self.lin_s = nn.Linear(sum(channels) + dim, channels[1], bias=bias)
+        if self.normalization == 'BatchNorm':
+            self.bn = nn.BatchNorm1d(channels[1], track_running_stats=True)
+        elif self.normalization == 'LayerNorm':
+            self.ln = LayerNorm(channels[1])
+        elif self.normalization == 'PairNorm':
+            self.pn = PairNorm(channels[1])
+        elif self.normalization == 'InstanceNorm':
+            self.instance_norm = InstanceNorm(channels[1])
+        elif self.normalization == 'GraphNorm':
+            self.gn = GraphNorm(channels[1])
+        elif self.normalization == 'DiffGroupNorm':
+            self.group_norm = DiffGroupNorm(channels[1], 128)
+        elif self.normalization is None:
+            pass
+        else:
+            raise ValueError('Unknown normalization function: {}'.format(normalization))
+
+        self.reset_parameters()
+
+    def reset_parameters(self):
+        self.lin_f.reset_parameters()
+        self.lin_s.reset_parameters()
+        if self.normalization == 'BatchNorm':
+            self.bn.reset_parameters()
+
+    def forward(self, x: Union[torch.Tensor, PairTensor], edge_index: Adj,
+                edge_attr: OptTensor, batch, distance, size: Size = None) -> torch.Tensor:
+        """"""
+        if isinstance(x, torch.Tensor):
+            x: PairTensor = (x, x)
+
+        # propagate_type: (x: PairTensor, edge_attr: OptTensor)
+        out = self.propagate(edge_index, x=x, edge_attr=edge_attr, distance=distance, size=size)
+        if self.normalization == 'BatchNorm':
+            out = self.bn(out)
+        elif self.normalization == 'LayerNorm':
+            out = self.ln(out, batch)
+        elif self.normalization == 'PairNorm':
+            out = self.pn(out, batch)
+        elif self.normalization == 'InstanceNorm':
+            out = self.instance_norm(out, batch)
+        elif self.normalization == 'GraphNorm':
+            out = self.gn(out, batch)
+        elif self.normalization == 'DiffGroupNorm':
+            out = self.group_norm(out)
+        out += x[1]
+        return out
+
+    def message(self, x_i, x_j, edge_attr: OptTensor, distance) -> torch.Tensor:
+        z = torch.cat([x_i, x_j, edge_attr], dim=-1)
+        out = self.lin_f(z).sigmoid() * F.softplus(self.lin_s(z))
+        if self.if_exp:
+            sigma = 3
+            n = 2
+            out = out * torch.exp(-distance ** n / sigma ** n / 2).view(-1, 1)
+        return out
+
+    def __repr__(self):
+        return '{}({}, dim={})'.format(self.__class__.__name__, self.channels, self.dim)
+
+
+class GAT_Crystal(MessagePassing):
+    def __init__(self, in_features, out_features, edge_dim, heads, concat=False, normalization: str = None,
+                 dropout=0, bias=True, **kwargs):
+        super(GAT_Crystal, self).__init__(node_dim=0, aggr='add', flow='target_to_source', **kwargs)
+        self.in_features = in_features
+        self.out_features = out_features
+        self.heads = heads
+        self.concat = concat
+        self.dropout = dropout
+        self.neg_slope = 0.2
+        self.prelu = nn.PReLU()
+        self.bn1 = nn.BatchNorm1d(heads)
+        self.W = nn.Parameter(torch.Tensor(in_features + edge_dim, heads * out_features))
+        self.att = nn.Parameter(torch.Tensor(1, heads, 2 * out_features))
+
+        if bias and concat:
+            self.bias = nn.Parameter(torch.Tensor(heads * out_features))
+        elif bias and not concat:
+            self.bias = nn.Parameter(torch.Tensor(out_features))
+        else:
+            self.register_parameter('bias', None)
+
+        self.normalization = normalization
+        if self.normalization == 'BatchNorm':
+            self.bn = nn.BatchNorm1d(out_features, track_running_stats=True)
+        elif self.normalization == 'LayerNorm':
+            self.ln = LayerNorm(out_features)
+        elif self.normalization == 'PairNorm':
+            self.pn = PairNorm(out_features)
+        elif self.normalization == 'InstanceNorm':
+            self.instance_norm = InstanceNorm(out_features)
+        elif self.normalization == 'GraphNorm':
+            self.gn = GraphNorm(out_features)
+        elif self.normalization == 'DiffGroupNorm':
+            self.group_norm = DiffGroupNorm(out_features, 128)
+        elif self.normalization is None:
+            pass
+        else:
+            raise ValueError('Unknown normalization function: {}'.format(normalization))
+
+        self.reset_parameters()
+
+    def reset_parameters(self):
+        glorot(self.W)
+        glorot(self.att)
+        zeros(self.bias)
+
+    def forward(self, x, edge_index, edge_attr, batch, distance):
+        out = self.propagate(edge_index, x=x, edge_attr=edge_attr)
+
+        if self.normalization == 'BatchNorm':
+            out = self.bn(out)
+        elif self.normalization == 'LayerNorm':
+            out = self.ln(out, batch)
+        elif self.normalization == 'PairNorm':
+            out = self.pn(out, batch)
+        elif self.normalization == 'InstanceNorm':
+            out = self.instance_norm(out, batch)
+        elif self.normalization == 'GraphNorm':
+            out = self.gn(out, batch)
+        elif self.normalization == 'DiffGroupNorm':
+            out = self.group_norm(out)
+        return out
+
+    def message(self, edge_index_i, x_i, x_j, size_i, index, ptr: OptTensor, edge_attr):
+        x_i = torch.cat([x_i, edge_attr], dim=-1)
+        x_j = torch.cat([x_j, edge_attr], dim=-1)
+
+        x_i = F.softplus(torch.matmul(x_i, self.W))
+        x_j = F.softplus(torch.matmul(x_j, self.W))
+        x_i = x_i.view(-1, self.heads, self.out_features)
+        x_j = x_j.view(-1, self.heads, self.out_features)
+
+        alpha = F.softplus((torch.cat([x_i, x_j], dim=-1) * self.att).sum(dim=-1))
+        alpha = F.softplus(self.bn1(alpha))
+
+        alpha = softmax(alpha, index, ptr, size_i)
+
+        alpha = F.dropout(alpha, p=self.dropout, training=self.training)
+
+        return x_j * alpha.view(-1, self.heads, 1)
+
+    def update(self, aggr_out, x):
+        if self.concat is True:
+            aggr_out = aggr_out.view(-1, self.heads * self.out_features)
+        else:
+            aggr_out = aggr_out.mean(dim=1)
+        if self.bias is not None: aggr_out = aggr_out + self.bias
+        return aggr_out
+
+
+class PaninnNodeFea():
+    def __init__(self, node_fea_s, node_fea_v=None):
+        self.node_fea_s = node_fea_s
+        if node_fea_v == None:
+            self.node_fea_v = torch.zeros(node_fea_s.shape[0], node_fea_s.shape[1], 3, dtype=node_fea_s.dtype,
+                                          device=node_fea_s.device)
+        else:
+            self.node_fea_v = node_fea_v
+
+    def __add__(self, other):
+        return PaninnNodeFea(self.node_fea_s + other.node_fea_s, self.node_fea_v + other.node_fea_v)
+
+
+class PAINN(nn.Module):
+    def __init__(self, in_features, edge_dim, rc: float, l: int, normalization):
+        super(PAINN, self).__init__()
+        self.ms1 = nn.Linear(in_features, in_features)
+        self.ssp = ShiftedSoftplus()
+        self.ms2 = nn.Linear(in_features, in_features * 3)
+
+        self.rbf = RBF(rc, l)
+        self.mv = nn.Linear(l, in_features * 3)
+        self.fc = cosine_cutoff(rc)
+
+        self.us1 = nn.Linear(in_features * 2, in_features)
+        self.us2 = nn.Linear(in_features, in_features * 3)
+
+        self.normalization = normalization
+        if self.normalization == 'BatchNorm':
+            self.bn = nn.BatchNorm1d(in_features, track_running_stats=True)
+        elif self.normalization == 'LayerNorm':
+            self.ln = LayerNorm(in_features)
+        elif self.normalization == 'PairNorm':
+            self.pn = PairNorm(in_features)
+        elif self.normalization == 'InstanceNorm':
+            self.instance_norm = InstanceNorm(in_features)
+        elif self.normalization == 'GraphNorm':
+            self.gn = GraphNorm(in_features)
+        elif self.normalization == 'DiffGroupNorm':
+            self.group_norm = DiffGroupNorm(in_features, 128)
+        elif self.normalization is None or self.normalization == 'None':
+            pass
+        else:
+            raise ValueError('Unknown normalization function: {}'.format(normalization))
+
+    def forward(self, x: Union[torch.Tensor, PairTensor], edge_index: Adj,
+                edge_attr: OptTensor, batch, edge_vec) -> torch.Tensor:
+        r = torch.sqrt((edge_vec ** 2).sum(dim=-1) + _eps).unsqueeze(-1)
+        sj = x.node_fea_s[edge_index[1, :]]
+        vj = x.node_fea_v[edge_index[1, :]]
+
+        phi = self.ms2(self.ssp(self.ms1(sj)))
+        w = self.fc(r) * self.mv(self.rbf(r))
+        v_, s_, r_ = torch.chunk(phi * w, 3, dim=-1)
+
+        ds_update = s_
+        dv_update = vj * v_.unsqueeze(-1) + r_.unsqueeze(-1) * (edge_vec / r).unsqueeze(1)
+
+        ds = scatter(ds_update, edge_index[0], dim=0, dim_size=x.node_fea_s.shape[0], reduce='mean')
+        dv = scatter(dv_update, edge_index[0], dim=0, dim_size=x.node_fea_s.shape[0], reduce='mean')
+        x = x + PaninnNodeFea(ds, dv)
+
+        sj = x.node_fea_s[edge_index[1, :]]
+        vj = x.node_fea_v[edge_index[1, :]]
+        norm = torch.sqrt((vj ** 2).sum(dim=-1) + _eps)
+        s = torch.cat([norm, sj], dim=-1)
+        sj = self.us2(self.ssp(self.us1(s)))
+
+        uv = scatter(vj, edge_index[0], dim=0, dim_size=x.node_fea_s.shape[0], reduce='mean')
+        norm = torch.sqrt((uv ** 2).sum(dim=-1) + _eps).unsqueeze(-1)
+        s_ = scatter(sj, edge_index[0], dim=0, dim_size=x.node_fea_s.shape[0], reduce='mean')
+        avv, asv, ass = torch.chunk(s_, 3, dim=-1)
+
+        ds = ((uv / norm) ** 2).sum(dim=-1) * asv + ass
339
+ dv = uv * avv.unsqueeze(-1)
340
+
341
+ if self.normalization == 'BatchNorm':
342
+ ds = self.bn(ds)
343
+ elif self.normalization == 'LayerNorm':
344
+ ds = self.ln(ds, batch)
345
+ elif self.normalization == 'PairNorm':
346
+ ds = self.pn(ds, batch)
347
+ elif self.normalization == 'InstanceNorm':
348
+ ds = self.instance_norm(ds, batch)
349
+ elif self.normalization == 'GraphNorm':
350
+ ds = self.gn(ds, batch)
351
+ elif self.normalization == 'DiffGroupNorm':
352
+ ds = self.group_norm(ds)
353
+
354
+ x = x + PaninnNodeFea(ds, dv)
355
+
356
+ return x
357
+
358
+
359
+ class MPLayer(nn.Module):
360
+ def __init__(self, in_atom_fea_len, in_edge_fea_len, out_edge_fea_len, if_exp, if_edge_update, normalization,
361
+ atom_update_net, gauss_stop, output_layer=False):
362
+ super(MPLayer, self).__init__()
363
+ if atom_update_net == 'CGConv':
364
+ self.cgconv = CGConv(channels=in_atom_fea_len,
365
+ dim=in_edge_fea_len,
366
+ aggr='add',
367
+ normalization=normalization,
368
+ if_exp=if_exp)
369
+ elif atom_update_net == 'GAT':
370
+ self.cgconv = GAT_Crystal(
371
+ in_features=in_atom_fea_len,
372
+ out_features=in_atom_fea_len,
373
+ edge_dim=in_edge_fea_len,
374
+ heads=3,
375
+ normalization=normalization
376
+ )
377
+ elif atom_update_net == 'PAINN':
378
+ self.cgconv = PAINN(
379
+ in_features=in_atom_fea_len,
380
+ edge_dim=in_edge_fea_len,
381
+ rc=gauss_stop,
382
+ l=64,
383
+ normalization=normalization
384
+ )
385
+
386
+ self.if_edge_update = if_edge_update
387
+ self.atom_update_net = atom_update_net
388
+ if if_edge_update:
389
+ if output_layer:
390
+ self.e_lin = nn.Sequential(nn.Linear(in_edge_fea_len + in_atom_fea_len * 2, 128),
391
+ nn.SiLU(),
392
+ nn.Linear(128, out_edge_fea_len),
393
+ )
394
+ else:
395
+ self.e_lin = nn.Sequential(nn.Linear(in_edge_fea_len + in_atom_fea_len * 2, 128),
396
+ nn.SiLU(),
397
+ nn.Linear(128, out_edge_fea_len),
398
+ nn.SiLU(),
399
+ )
400
+
401
+ def forward(self, atom_fea, edge_idx, edge_fea, batch, distance, edge_vec):
402
+ if self.atom_update_net == 'PAINN':
403
+ atom_fea = self.cgconv(atom_fea, edge_idx, edge_fea, batch, edge_vec)
404
+ atom_fea_s = atom_fea.node_fea_s
405
+ else:
406
+ atom_fea = self.cgconv(atom_fea, edge_idx, edge_fea, batch, distance)
407
+ atom_fea_s = atom_fea
408
+ if self.if_edge_update:
409
+ row, col = edge_idx
410
+ edge_fea = self.e_lin(torch.cat([atom_fea_s[row], atom_fea_s[col], edge_fea], dim=-1))
411
+ return atom_fea, edge_fea
412
+ else:
413
+ return atom_fea
414
+
415
+
416
+ class LCMPLayer(nn.Module):
417
+ def __init__(self, in_atom_fea_len, in_edge_fea_len, out_edge_fea_len, num_l,
418
+ normalization: str = None, bias: bool = True, if_exp: bool = False):
419
+ super(LCMPLayer, self).__init__()
420
+ self.in_atom_fea_len = in_atom_fea_len
421
+ self.normalization = normalization
422
+ self.if_exp = if_exp
423
+
424
+ self.lin_f = nn.Linear(in_atom_fea_len * 2 + in_edge_fea_len, in_atom_fea_len, bias=bias)
425
+ self.lin_s = nn.Linear(in_atom_fea_len * 2 + in_edge_fea_len, in_atom_fea_len, bias=bias)
426
+ self.bn = nn.BatchNorm1d(in_atom_fea_len, track_running_stats=True)
427
+
428
+ self.e_lin = nn.Sequential(nn.Linear(in_edge_fea_len + in_atom_fea_len * 2 - num_l ** 2, 128),
429
+ nn.SiLU(),
430
+ nn.Linear(128, out_edge_fea_len)
431
+ )
432
+ self.reset_parameters()
433
+
434
+ def reset_parameters(self):
435
+ self.lin_f.reset_parameters()
436
+ self.lin_s.reset_parameters()
437
+ if self.normalization == 'BatchNorm':
438
+ self.bn.reset_parameters()
439
+
440
+ def forward(self, atom_fea, edge_fea, sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index, distance,
441
+ huge_structure, output_final_layer_neuron):
442
+ if huge_structure:
443
+ sub_graph_batch_num = 8
444
+
445
+ sub_graph_num = sub_atom_idx.shape[0]
446
+ sub_graph_batch_size = ceil(sub_graph_num / sub_graph_batch_num)
447
+
448
+ num_edge = edge_fea.shape[0]
449
+ vf_update = torch.zeros((num_edge * 2, self.in_atom_fea_len)).type(torch.get_default_dtype()).to(atom_fea.device)
450
+ for sub_graph_batch_index in range(sub_graph_batch_num):
451
+ if sub_graph_batch_index == sub_graph_batch_num - 1:
452
+ sub_graph_idx = slice(sub_graph_batch_size * sub_graph_batch_index, sub_graph_num)
453
+ else:
454
+ sub_graph_idx = slice(sub_graph_batch_size * sub_graph_batch_index,
455
+ sub_graph_batch_size * (sub_graph_batch_index + 1))
456
+
457
+ sub_atom_idx_batch = sub_atom_idx[sub_graph_idx]
458
+ sub_edge_idx_batch = sub_edge_idx[sub_graph_idx]
459
+ sub_edge_ang_batch = sub_edge_ang[sub_graph_idx]
460
+ sub_index_batch = sub_index[sub_graph_idx]
461
+
462
+ z = torch.cat([atom_fea[sub_atom_idx_batch][:, 0, :], atom_fea[sub_atom_idx_batch][:, 1, :],
463
+ edge_fea[sub_edge_idx_batch], sub_edge_ang_batch], dim=-1)
464
+ out = self.lin_f(z).sigmoid() * F.softplus(self.lin_s(z))
465
+
466
+ if self.if_exp:
467
+ sigma = 3
468
+ n = 2
469
+ out = out * torch.exp(-distance[sub_edge_idx_batch] ** n / sigma ** n / 2).view(-1, 1)
470
+
471
+ vf_update += scatter_add(out, sub_index_batch, dim=0, dim_size=num_edge * 2)
472
+
473
+ if self.normalization == 'BatchNorm':
474
+ vf_update = self.bn(vf_update)
475
+ vf_update = vf_update.reshape(num_edge, 2, -1)
476
+ if output_final_layer_neuron != '':
477
+ final_layer_neuron = torch.cat([vf_update[:, 0, :], vf_update[:, 1, :], edge_fea],
478
+ dim=-1).detach().cpu().numpy()
479
+ np.save(os.path.join(output_final_layer_neuron, 'final_layer_neuron.npy'), final_layer_neuron)
480
+ out = self.e_lin(torch.cat([vf_update[:, 0, :], vf_update[:, 1, :], edge_fea], dim=-1))
481
+
482
+ return out
483
+
484
+ num_edge = edge_fea.shape[0]
485
+ z = torch.cat(
486
+ [atom_fea[sub_atom_idx][:, 0, :], atom_fea[sub_atom_idx][:, 1, :], edge_fea[sub_edge_idx], sub_edge_ang],
487
+ dim=-1)
488
+ out = self.lin_f(z).sigmoid() * F.softplus(self.lin_s(z))
489
+
490
+ if self.if_exp:
491
+ sigma = 3
492
+ n = 2
493
+ out = out * torch.exp(-distance[sub_edge_idx] ** n / sigma ** n / 2).view(-1, 1)
494
+
495
+ out = scatter_add(out, sub_index, dim=0)
496
+ if self.normalization == 'BatchNorm':
497
+ out = self.bn(out)
498
+ out = out.reshape(num_edge, 2, -1)
499
+ if output_final_layer_neuron != '':
500
+ final_layer_neuron = torch.cat([out[:, 0, :], out[:, 1, :], edge_fea], dim=-1).detach().cpu().numpy()
501
+ np.save(os.path.join(output_final_layer_neuron, 'final_layer_neuron.npy'), final_layer_neuron)
502
+ out = self.e_lin(torch.cat([out[:, 0, :], out[:, 1, :], edge_fea], dim=-1))
503
+ return out
504
+
505
+
506
+ class MultipleLinear(nn.Module):
507
+ def __init__(self, num_linear: int, in_fea_len: int, out_fea_len: int, bias: bool = True) -> None:
508
+ super(MultipleLinear, self).__init__()
509
+ self.num_linear = num_linear
510
+ self.out_fea_len = out_fea_len
511
+ self.weight = nn.Parameter(torch.Tensor(num_linear, in_fea_len, out_fea_len))
512
+ if bias:
513
+ self.bias = nn.Parameter(torch.Tensor(num_linear, out_fea_len))
514
+ else:
515
+ self.register_parameter('bias', None)
516
+ # self.ln = LayerNorm(num_linear * out_fea_len)
517
+ # self.gn = GraphNorm(out_fea_len)
518
+ self.reset_parameters()
519
+
520
+ def reset_parameters(self) -> None:
521
+ nn.init.kaiming_uniform_(self.weight, a=sqrt(5))
522
+ if self.bias is not None:
523
+ fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight)
524
+ bound = 1 / sqrt(fan_in)
525
+ nn.init.uniform_(self.bias, -bound, bound)
526
+
527
+ def forward(self, input: torch.Tensor, batch_edge: torch.Tensor) -> torch.Tensor:
528
+ output = torch.matmul(input, self.weight)
529
+
530
+ if self.bias is not None:
531
+ output += self.bias[:, None, :]
532
+ return output
533
+
534
+
535
+ class HGNN(nn.Module):
536
+ def __init__(self, num_species, in_atom_fea_len, in_edge_fea_len, num_orbital,
537
+ distance_expansion, gauss_stop, if_exp, if_MultipleLinear, if_edge_update, if_lcmp,
538
+ normalization, atom_update_net, separate_onsite,
539
+ trainable_gaussians, type_affine, num_l=5):
540
+ super(HGNN, self).__init__()
541
+ self.num_species = num_species
542
+ self.embed = nn.Embedding(num_species + 5, in_atom_fea_len)
543
+
544
+ # pair-type aware affine
545
+ if type_affine:
546
+ self.type_affine = nn.Embedding(
547
+ num_species ** 2, 2,
548
+ _weight=torch.stack([torch.ones(num_species ** 2), torch.zeros(num_species ** 2)], dim=-1)
549
+ )
550
+ else:
551
+ self.type_affine = None
552
+
553
+ if if_edge_update or (if_edge_update is False and if_lcmp is False):
554
+ distance_expansion_len = in_edge_fea_len
555
+ else:
556
+ distance_expansion_len = in_edge_fea_len - num_l ** 2
557
+ if distance_expansion == 'GaussianBasis':
558
+ self.distance_expansion = GaussianBasis(
559
+ 0.0, gauss_stop, distance_expansion_len, trainable=trainable_gaussians
560
+ )
561
+ elif distance_expansion == 'BesselBasis':
562
+ self.distance_expansion = BesselBasisLayer(distance_expansion_len, gauss_stop, envelope_exponent=5)
563
+ elif distance_expansion == 'ExpBernsteinBasis':
564
+ self.distance_expansion = ExpBernsteinBasis(K=distance_expansion_len, gamma=0.5, cutoff=gauss_stop,
565
+ trainable=True)
566
+ else:
567
+ raise ValueError('Unknown distance expansion function: {}'.format(distance_expansion))
568
+
569
+ self.if_MultipleLinear = if_MultipleLinear
570
+ self.if_edge_update = if_edge_update
571
+ self.if_lcmp = if_lcmp
572
+ self.atom_update_net = atom_update_net
573
+ self.separate_onsite = separate_onsite
574
+
575
+ if if_lcmp == True:
576
+ mp_output_edge_fea_len = in_edge_fea_len - num_l ** 2
577
+ else:
578
+ assert if_MultipleLinear == False
579
+ mp_output_edge_fea_len = in_edge_fea_len
580
+
581
+ if if_edge_update == True:
582
+ self.mp1 = MPLayer(in_atom_fea_len, in_edge_fea_len, in_edge_fea_len, if_exp, if_edge_update, normalization,
583
+ atom_update_net, gauss_stop)
584
+ self.mp2 = MPLayer(in_atom_fea_len, in_edge_fea_len, in_edge_fea_len, if_exp, if_edge_update, normalization,
585
+ atom_update_net, gauss_stop)
586
+ self.mp3 = MPLayer(in_atom_fea_len, in_edge_fea_len, in_edge_fea_len, if_exp, if_edge_update, normalization,
587
+ atom_update_net, gauss_stop)
588
+ self.mp4 = MPLayer(in_atom_fea_len, in_edge_fea_len, in_edge_fea_len, if_exp, if_edge_update, normalization,
589
+ atom_update_net, gauss_stop)
590
+ self.mp5 = MPLayer(in_atom_fea_len, in_edge_fea_len, mp_output_edge_fea_len, if_exp, if_edge_update,
591
+ normalization, atom_update_net, gauss_stop)
592
+ else:
593
+ self.mp1 = MPLayer(in_atom_fea_len, distance_expansion_len, None, if_exp, if_edge_update, normalization,
594
+ atom_update_net, gauss_stop)
595
+ self.mp2 = MPLayer(in_atom_fea_len, distance_expansion_len, None, if_exp, if_edge_update, normalization,
596
+ atom_update_net, gauss_stop)
597
+ self.mp3 = MPLayer(in_atom_fea_len, distance_expansion_len, None, if_exp, if_edge_update, normalization,
598
+ atom_update_net, gauss_stop)
599
+ self.mp4 = MPLayer(in_atom_fea_len, distance_expansion_len, None, if_exp, if_edge_update, normalization,
600
+ atom_update_net, gauss_stop)
601
+ self.mp5 = MPLayer(in_atom_fea_len, distance_expansion_len, None, if_exp, if_edge_update, normalization,
602
+ atom_update_net, gauss_stop)
603
+
604
+ if if_lcmp == True:
605
+ if self.if_MultipleLinear == True:
606
+ self.lcmp = LCMPLayer(in_atom_fea_len, in_edge_fea_len, 32, num_l, if_exp=if_exp)
607
+ self.multiple_linear1 = MultipleLinear(num_orbital, 32, 16)
608
+ self.multiple_linear2 = MultipleLinear(num_orbital, 16, 1)
609
+ else:
610
+ self.lcmp = LCMPLayer(in_atom_fea_len, in_edge_fea_len, num_orbital, num_l, if_exp=if_exp)
611
+ else:
612
+ self.mp_output = MPLayer(in_atom_fea_len, in_edge_fea_len, num_orbital, if_exp, if_edge_update=True,
613
+ normalization=normalization, atom_update_net=atom_update_net,
614
+ gauss_stop=gauss_stop, output_layer=True)
615
+
616
+
617
+ def forward(self, atom_attr, edge_idx, edge_attr, batch,
618
+ sub_atom_idx=None, sub_edge_idx=None, sub_edge_ang=None, sub_index=None,
619
+ huge_structure=False, output_final_layer_neuron=''):
620
+ batch_edge = batch[edge_idx[0]]
621
+ atom_fea0 = self.embed(atom_attr)
622
+ distance = edge_attr[:, 0]
623
+ edge_vec = edge_attr[:, 1:4] - edge_attr[:, 4:7]
624
+ if self.type_affine is None:
625
+ edge_fea0 = self.distance_expansion(distance)
626
+ else:
627
+ affine_coeff = self.type_affine(self.num_species * atom_attr[edge_idx[0]] + atom_attr[edge_idx[1]])
628
+ edge_fea0 = self.distance_expansion(distance * affine_coeff[:, 0] + affine_coeff[:, 1])
629
+ if self.atom_update_net == "PAINN":
630
+ atom_fea0 = PaninnNodeFea(atom_fea0)
631
+
632
+ if self.if_edge_update == True:
633
+ atom_fea, edge_fea = self.mp1(atom_fea0, edge_idx, edge_fea0, batch, distance, edge_vec)
634
+ atom_fea, edge_fea = self.mp2(atom_fea, edge_idx, edge_fea, batch, distance, edge_vec)
635
+ atom_fea0, edge_fea0 = atom_fea0 + atom_fea, edge_fea0 + edge_fea
636
+ atom_fea, edge_fea = self.mp3(atom_fea0, edge_idx, edge_fea0, batch, distance, edge_vec)
637
+ atom_fea, edge_fea = self.mp4(atom_fea, edge_idx, edge_fea, batch, distance, edge_vec)
638
+ atom_fea0, edge_fea0 = atom_fea0 + atom_fea, edge_fea0 + edge_fea
639
+ atom_fea, edge_fea = self.mp5(atom_fea0, edge_idx, edge_fea0, batch, distance, edge_vec)
640
+
641
+ if self.if_lcmp == True:
642
+ if self.atom_update_net == 'PAINN':
643
+ atom_fea_s = atom_fea.node_fea_s
644
+ else:
645
+ atom_fea_s = atom_fea
646
+ out = self.lcmp(atom_fea_s, edge_fea, sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index, distance,
647
+ huge_structure, output_final_layer_neuron)
648
+ else:
649
+ atom_fea, edge_fea = self.mp_output(atom_fea, edge_idx, edge_fea, batch, distance, edge_vec)
650
+ out = edge_fea
651
+ else:
652
+ atom_fea = self.mp1(atom_fea0, edge_idx, edge_fea0, batch, distance, edge_vec)
653
+ atom_fea = self.mp2(atom_fea, edge_idx, edge_fea0, batch, distance, edge_vec)
654
+ atom_fea0 = atom_fea0 + atom_fea
655
+ atom_fea = self.mp3(atom_fea0, edge_idx, edge_fea0, batch, distance, edge_vec)
656
+ atom_fea = self.mp4(atom_fea, edge_idx, edge_fea0, batch, distance, edge_vec)
657
+ atom_fea0 = atom_fea0 + atom_fea
658
+ atom_fea = self.mp5(atom_fea0, edge_idx, edge_fea0, batch, distance, edge_vec)
659
+
660
+ if self.atom_update_net == 'PAINN':
661
+ atom_fea_s = atom_fea.node_fea_s
662
+ else:
663
+ atom_fea_s = atom_fea
664
+ if self.if_lcmp == True:
665
+ out = self.lcmp(atom_fea_s, edge_fea0, sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index, distance,
666
+ huge_structure, output_final_layer_neuron)
667
+ else:
668
+ atom_fea, edge_fea = self.mp_output(atom_fea, edge_idx, edge_fea0, batch, distance, edge_vec)
669
+ out = edge_fea
670
+
671
+ if self.if_MultipleLinear == True:
672
+ out = self.multiple_linear1(F.silu(out), batch_edge)
673
+ out = self.multiple_linear2(F.silu(out), batch_edge)
674
+ out = out.T
675
+
676
+ return out
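Aside: the message function in `LCMPLayer` (`self.lin_f(z).sigmoid() * F.softplus(self.lin_s(z))`) is a CGConv-style gate: a sigmoid filter modulating a softplus core, both computed from the same concatenated features. A minimal NumPy sketch of that gating, with hypothetical random weights rather than the trained model:

```python
import numpy as np

def sigmoid(x):
    # gate in (0, 1): how much of each message channel passes through
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # smooth, strictly positive core signal
    return np.log1p(np.exp(x))

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 6))    # 4 messages, concatenated feature dim 6
W_f = rng.standard_normal((6, 3))  # hypothetical filter weights (lin_f)
W_s = rng.standard_normal((6, 3))  # hypothetical core weights (lin_s)

# gated message: elementwise product of filter and core, shape (4, 3)
out = sigmoid(z @ W_f) * softplus(z @ W_s)
```

Because both factors are positive, every gated message component is positive; the sigmoid factor only attenuates, never amplifies, the softplus core.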
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__init__.py ADDED
@@ -0,0 +1,4 @@
+from .openmx_parse import OijLoad, GetEEiEij, openmx_parse_overlap
+from .get_rc import get_rc
+from .abacus_get_data import abacus_parse
+from .siesta_get_data import siesta_parse
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (394 Bytes). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/abacus_get_data.cpython-312.pyc ADDED
Binary file (23 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/get_rc.cpython-312.pyc ADDED
Binary file (11.2 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/openmx_parse.cpython-312.pyc ADDED
Binary file (31.5 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/__pycache__/siesta_get_data.cpython-312.pyc ADDED
Binary file (18.7 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/abacus_get_data.py ADDED
@@ -0,0 +1,340 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Script for interface from ABACUS (http://abacus.ustc.edu.cn/) to DeepH-pack
2
+ # Coded by ZC Tang @ Tsinghua Univ. e-mail: az_txycha@126.com
3
+ # Modified by He Li @ Tsinghua Univ. & XY Zhou @ Peking Univ.
4
+ # To use this script, please add 'out_mat_hs2 1' in ABACUS INPUT File
5
+ # Current version is capable of coping with f-orbitals
6
+ # 20220717: Read structure from running_scf.log
7
+ # 20220919: The suffix of the output sub-directories (OUT.suffix) can be set by ["basic"]["abacus_suffix"] keyword in preprocess.ini
8
+ # 20220920: Supporting cartesian coordinates in the log file
9
+ # 20231228: Supporting ABACUS v3.4
10
+
11
+ import os
12
+ import sys
13
+ import json
14
+ import re
15
+
16
+ import numpy as np
17
+ from scipy.sparse import csr_matrix
18
+ from scipy.linalg import block_diag
19
+ import argparse
20
+ import h5py
21
+
22
+
23
+ Bohr2Ang = 0.529177249
24
+ periodic_table = {'Ac': 89, 'Ag': 47, 'Al': 13, 'Am': 95, 'Ar': 18, 'As': 33, 'At': 85, 'Au': 79, 'B': 5, 'Ba': 56,
25
+ 'Be': 4, 'Bi': 83, 'Bk': 97, 'Br': 35, 'C': 6, 'Ca': 20, 'Cd': 48, 'Ce': 58, 'Cf': 98, 'Cl': 17,
26
+ 'Cm': 96, 'Co': 27, 'Cr': 24, 'Cs': 55, 'Cu': 29, 'Dy': 66, 'Er': 68, 'Es': 99, 'Eu': 63, 'F': 9,
27
+ 'Fe': 26, 'Fm': 100, 'Fr': 87, 'Ga': 31, 'Gd': 64, 'Ge': 32, 'H': 1, 'He': 2, 'Hf': 72, 'Hg': 80,
28
+ 'Ho': 67, 'I': 53, 'In': 49, 'Ir': 77, 'K': 19, 'Kr': 36, 'La': 57, 'Li': 3, 'Lr': 103, 'Lu': 71,
29
+ 'Md': 101, 'Mg': 12, 'Mn': 25, 'Mo': 42, 'N': 7, 'Na': 11, 'Nb': 41, 'Nd': 60, 'Ne': 10, 'Ni': 28,
30
+ 'No': 102, 'Np': 93, 'O': 8, 'Os': 76, 'P': 15, 'Pa': 91, 'Pb': 82, 'Pd': 46, 'Pm': 61, 'Po': 84,
31
+ 'Pr': 59, 'Pt': 78, 'Pu': 94, 'Ra': 88, 'Rb': 37, 'Re': 75, 'Rh': 45, 'Rn': 86, 'Ru': 44, 'S': 16,
32
+ 'Sb': 51, 'Sc': 21, 'Se': 34, 'Si': 14, 'Sm': 62, 'Sn': 50, 'Sr': 38, 'Ta': 73, 'Tb': 65, 'Tc': 43,
33
+ 'Te': 52, 'Th': 90, 'Ti': 22, 'Tl': 81, 'Tm': 69, 'U': 92, 'V': 23, 'W': 74, 'Xe': 54, 'Y': 39,
34
+ 'Yb': 70, 'Zn': 30, 'Zr': 40, 'Rf': 104, 'Db': 105, 'Sg': 106, 'Bh': 107, 'Hs': 108, 'Mt': 109,
35
+ 'Ds': 110, 'Rg': 111, 'Cn': 112, 'Nh': 113, 'Fl': 114, 'Mc': 115, 'Lv': 116, 'Ts': 117, 'Og': 118}
36
+
37
+
38
+ class OrbAbacus2DeepH:
39
+ def __init__(self):
40
+ self.Us_abacus2deeph = {}
41
+ self.Us_abacus2deeph[0] = np.eye(1)
42
+ self.Us_abacus2deeph[1] = np.eye(3)[[1, 2, 0]]
43
+ self.Us_abacus2deeph[2] = np.eye(5)[[0, 3, 4, 1, 2]]
44
+ self.Us_abacus2deeph[3] = np.eye(7)[[0, 1, 2, 3, 4, 5, 6]]
45
+
46
+ minus_dict = {
47
+ 1: [0, 1],
48
+ 2: [3, 4],
49
+ 3: [1, 2, 5, 6],
50
+ }
51
+ for k, v in minus_dict.items():
52
+ self.Us_abacus2deeph[k][v] *= -1
53
+
54
+ def get_U(self, l):
55
+ if l > 3:
56
+ raise NotImplementedError("Only support l = s, p, d, f")
57
+ return self.Us_abacus2deeph[l]
58
+
59
+ def transform(self, mat, l_lefts, l_rights):
60
+ block_lefts = block_diag(*[self.get_U(l_left) for l_left in l_lefts])
61
+ block_rights = block_diag(*[self.get_U(l_right) for l_right in l_rights])
62
+ return block_lefts @ mat @ block_rights.T
63
+
64
+ def abacus_parse(input_path, output_path, data_name, only_S=False, get_r=False):
65
+ input_path = os.path.abspath(input_path)
66
+ output_path = os.path.abspath(output_path)
67
+ os.makedirs(output_path, exist_ok=True)
68
+
69
+ def find_target_line(f, target):
70
+ line = f.readline()
71
+ while line:
72
+ if target in line:
73
+ return line
74
+ line = f.readline()
75
+ return None
76
+ if only_S:
77
+ log_file_name = "running_get_S.log"
78
+ else:
79
+ log_file_name = "running_scf.log"
80
+ with open(os.path.join(input_path, data_name, log_file_name), 'r') as f:
81
+ f.readline()
82
+ line = f.readline()
83
+ # assert "WELCOME TO ABACUS" in line
84
+ assert find_target_line(f, "READING UNITCELL INFORMATION") is not None, 'Cannot find "READING UNITCELL INFORMATION" in log file'
85
+ num_atom_type = int(f.readline().split()[-1])
86
+
87
+ assert find_target_line(f, "lattice constant (Bohr)") is not None
88
+ lattice_constant = float(f.readline().split()[-1]) # unit is Angstrom
89
+
90
+ site_norbits_dict = {}
91
+ orbital_types_dict = {}
92
+ for index_type in range(num_atom_type):
93
+ tmp = find_target_line(f, "READING ATOM TYPE")
94
+ assert tmp is not None, 'Cannot find "ATOM TYPE" in log file'
95
+ assert tmp.split()[-1] == str(index_type + 1)
96
+ if tmp is None:
97
+ raise Exception(f"Cannot find ATOM {index_type} in {log_file_name}")
98
+
99
+ line = f.readline()
100
+ assert "atom label =" in line
101
+ atom_label = line.split()[-1]
102
+ assert atom_label in periodic_table, "Atom label should be in periodic table"
103
+ atom_type = periodic_table[atom_label]
104
+
105
+ current_site_norbits = 0
106
+ current_orbital_types = []
107
+ while True:
108
+ line = f.readline()
109
+ if "number of zeta" in line:
110
+ tmp = line.split()
111
+ L = int(tmp[0][2:-1])
112
+ num_L = int(tmp[-1])
113
+ current_site_norbits += (2 * L + 1) * num_L
114
+ current_orbital_types.extend([L] * num_L)
115
+ else:
116
+ break
117
+ site_norbits_dict[atom_type] = current_site_norbits
118
+ orbital_types_dict[atom_type] = current_orbital_types
119
+
120
+ line = find_target_line(f, "TOTAL ATOM NUMBER")
121
+ assert line is not None, 'Cannot find "TOTAL ATOM NUMBER" in log file'
122
+ nsites = int(line.split()[-1])
123
+
124
+ line = find_target_line(f, " COORDINATES")
125
+ assert line is not None, 'Cannot find "DIRECT COORDINATES" or "CARTESIAN COORDINATES" in log file'
126
+ if "DIRECT" in line:
127
+ coords_type = "direct"
128
+ elif "CARTESIAN" in line:
129
+ coords_type = "cartesian"
130
+ else:
131
+ raise ValueError('Cannot find "DIRECT COORDINATES" or "CARTESIAN COORDINATES" in log file')
132
+
133
+ assert "atom" in f.readline()
134
+ frac_coords = np.zeros((nsites, 3))
135
+ site_norbits = np.zeros(nsites, dtype=int)
136
+ element = np.zeros(nsites, dtype=int)
137
+ for index_site in range(nsites):
138
+ line = f.readline()
139
+ tmp = line.split()
140
+ assert "tau" in tmp[0]
141
+ atom_label = ''.join(re.findall(r'[A-Za-z]', tmp[0][5:]))
142
+ assert atom_label in periodic_table, "Atom label should be in periodic table"
143
+ element[index_site] = periodic_table[atom_label]
144
+ site_norbits[index_site] = site_norbits_dict[element[index_site]]
145
+ frac_coords[index_site, :] = np.array(tmp[1:4])
146
+ norbits = int(np.sum(site_norbits))
147
+ site_norbits_cumsum = np.cumsum(site_norbits)
148
+
149
+ assert find_target_line(f, "Lattice vectors: (Cartesian coordinate: in unit of a_0)") is not None
150
+ lattice = np.zeros((3, 3))
151
+ for index_lat in range(3):
152
+ lattice[index_lat, :] = np.array(f.readline().split())
153
+ if coords_type == "cartesian":
154
+ frac_coords = frac_coords @ np.matrix(lattice).I
155
+ lattice = lattice * lattice_constant
156
+ if only_S:
157
+ spinful = False
158
+ else:
159
+ line = find_target_line(f, "NSPIN")
160
+ assert line is not None, 'Cannot find "NSPIN" in log file'
161
+ if "NSPIN == 1" in line:
162
+ spinful = False
163
+ elif "NSPIN == 4" in line:
164
+ spinful = True
165
+ else:
166
+ raise ValueError(f'{line} is not supported')
167
+ if only_S:
168
+ fermi_level = 0.0
169
+ else:
170
+ with open(os.path.join(input_path, data_name, log_file_name), 'r') as f:
171
+ line = find_target_line(f, "EFERMI")
172
+ assert line is not None, 'Cannot find "EFERMI" in log file'
173
+ assert "eV" in line
174
+ fermi_level = float(line.split()[2])
175
+ assert find_target_line(f, "EFERMI") is None, "There is more than one EFERMI in log file"
176
+
177
+ np.savetxt(os.path.join(output_path, "lat.dat"), np.transpose(lattice))
178
+ np.savetxt(os.path.join(output_path, "rlat.dat"), np.linalg.inv(lattice) * 2 * np.pi)
179
+ cart_coords = frac_coords @ lattice
180
+ np.savetxt(os.path.join(output_path, "site_positions.dat").format(output_path), np.transpose(cart_coords))
181
+ np.savetxt(os.path.join(output_path, "element.dat"), element, fmt='%d')
182
+ info = {'nsites' : nsites, 'isorthogonal': False, 'isspinful': spinful, 'norbits': norbits, 'fermi_level': fermi_level}
183
+ with open('{}/info.json'.format(output_path), 'w') as info_f:
184
+ json.dump(info, info_f)
185
+ with open(os.path.join(output_path, "orbital_types.dat"), 'w') as f:
186
+ for atomic_number in element:
187
+ for index_l, l in enumerate(orbital_types_dict[atomic_number]):
188
+ if index_l == 0:
189
+ f.write(str(l))
190
+ else:
191
+ f.write(f" {l}")
192
+ f.write('\n')
193
+
194
+ U_orbital = OrbAbacus2DeepH()
195
+ def parse_matrix(matrix_path, factor, spinful=False):
196
+ matrix_dict = dict()
197
+ with open(matrix_path, 'r') as f:
198
+ line = f.readline() # read "Matrix Dimension of ..."
199
+ if not "Matrix Dimension of" in line:
200
+ line = f.readline() # ABACUS >= 3.0
201
+ assert "Matrix Dimension of" in line
202
+ f.readline() # read "Matrix number of ..."
203
+ norbits = int(line.split()[-1])
204
+ for line in f:
205
+ line1 = line.split()
206
+ if len(line1) == 0:
207
+ break
208
+ num_element = int(line1[3])
209
+ if num_element != 0:
210
+ R_cur = np.array(line1[:3]).astype(int)
211
+ line2 = f.readline().split()
212
+ line3 = f.readline().split()
213
+ line4 = f.readline().split()
214
+ if not spinful:
215
+ hamiltonian_cur = csr_matrix((np.array(line2).astype(float), np.array(line3).astype(int),
216
+ np.array(line4).astype(int)), shape=(norbits, norbits)).toarray()
217
+ else:
218
+ line2 = np.char.replace(line2, '(', '')
219
+ line2 = np.char.replace(line2, ')', 'j')
220
+ line2 = np.char.replace(line2, ',', '+')
221
+ line2 = np.char.replace(line2, '+-', '-')
222
+ hamiltonian_cur = csr_matrix((np.array(line2).astype(np.complex128), np.array(line3).astype(int),
223
+ np.array(line4).astype(int)), shape=(norbits, norbits)).toarray()
224
+ for index_site_i in range(nsites):
225
+ for index_site_j in range(nsites):
226
+ key_str = f"[{R_cur[0]}, {R_cur[1]}, {R_cur[2]}, {index_site_i + 1}, {index_site_j + 1}]"
227
+ mat = hamiltonian_cur[(site_norbits_cumsum[index_site_i]
228
+ - site_norbits[index_site_i]) * (1 + spinful):
229
+ site_norbits_cumsum[index_site_i] * (1 + spinful),
230
+ (site_norbits_cumsum[index_site_j] - site_norbits[index_site_j]) * (1 + spinful):
231
+ site_norbits_cumsum[index_site_j] * (1 + spinful)]
232
+ if abs(mat).max() < 1e-8:
233
+ continue
234
+ if not spinful:
235
+ mat = U_orbital.transform(mat, orbital_types_dict[element[index_site_i]],
236
+ orbital_types_dict[element[index_site_j]])
237
+ else:
238
+ mat = mat.reshape((site_norbits[index_site_i], 2, site_norbits[index_site_j], 2))
239
+ mat = mat.transpose((1, 0, 3, 2)).reshape((2 * site_norbits[index_site_i],
240
+ 2 * site_norbits[index_site_j]))
241
+ mat = U_orbital.transform(mat, orbital_types_dict[element[index_site_i]] * 2,
242
+ orbital_types_dict[element[index_site_j]] * 2)
243
+ matrix_dict[key_str] = mat * factor
244
+ return matrix_dict, norbits
245
+
246
+ if only_S:
247
+ overlap_dict, tmp = parse_matrix(os.path.join(input_path, "SR.csr"), 1)
248
+ assert tmp == norbits
249
+ else:
250
+ hamiltonian_dict, tmp = parse_matrix(
251
+         os.path.join(input_path, data_name, "data-HR-sparse_SPIN0.csr"), 13.605698,  # Ryd2eV
+         spinful=spinful)
+     assert tmp == norbits * (1 + spinful)
+     overlap_dict, tmp = parse_matrix(os.path.join(input_path, data_name, "data-SR-sparse_SPIN0.csr"), 1,
+                                      spinful=spinful)
+     assert tmp == norbits * (1 + spinful)
+     if spinful:
+         overlap_dict_spinless = {}
+         for k, v in overlap_dict.items():
+             overlap_dict_spinless[k] = v[:v.shape[0] // 2, :v.shape[1] // 2].real
+         overlap_dict_spinless, overlap_dict = overlap_dict, overlap_dict_spinless
+
+     if not only_S:
+         with h5py.File(os.path.join(output_path, "hamiltonians.h5"), 'w') as fid:
+             for key_str, value in hamiltonian_dict.items():
+                 fid[key_str] = value
+     with h5py.File(os.path.join(output_path, "overlaps.h5"), 'w') as fid:
+         for key_str, value in overlap_dict.items():
+             fid[key_str] = value
+     if get_r:
+         def parse_r_matrix(matrix_path, factor):
+             matrix_dict = dict()
+             with open(matrix_path, 'r') as f:
+                 line = f.readline()
+                 norbits = int(line.split()[-1])
+                 for line in f:
+                     line1 = line.split()
+                     if len(line1) == 0:
+                         break
+                     assert len(line1) > 3
+                     R_cur = np.array(line1[:3]).astype(int)
+                     mat_cur = np.zeros((3, norbits * norbits))
+                     for line_index in range(norbits * norbits):
+                         line_mat = f.readline().split()
+                         assert len(line_mat) == 3
+                         mat_cur[:, line_index] = np.array(line_mat)
+                     mat_cur = mat_cur.reshape((3, norbits, norbits))
+
+                     for index_site_i in range(nsites):
+                         for index_site_j in range(nsites):
+                             for direction in range(3):
+                                 key_str = f"[{R_cur[0]}, {R_cur[1]}, {R_cur[2]}, {index_site_i + 1}, {index_site_j + 1}, {direction + 1}]"
+                                 mat = mat_cur[direction, site_norbits_cumsum[index_site_i]
+                                               - site_norbits[index_site_i]:site_norbits_cumsum[index_site_i],
+                                               site_norbits_cumsum[index_site_j]
+                                               - site_norbits[index_site_j]:site_norbits_cumsum[index_site_j]]
+                                 if abs(mat).max() < 1e-8:
+                                     continue
+                                 mat = U_orbital.transform(mat, orbital_types_dict[element[index_site_i]],
+                                                           orbital_types_dict[element[index_site_j]])
+                                 matrix_dict[key_str] = mat * factor
+             return matrix_dict, norbits
+         position_dict, tmp = parse_r_matrix(os.path.join(input_path, data_name, "data-rR-tr_SPIN1"), 0.529177249)  # Bohr2Ang
+         assert tmp == norbits
+
+         with h5py.File(os.path.join(output_path, "positions.h5"), 'w') as fid:
+             for key_str, value in position_dict.items():
+                 fid[key_str] = value
+
+
+ if __name__ == '__main__':
+     parser = argparse.ArgumentParser(description='Predict Hamiltonian')
+     parser.add_argument(
+         '-i', '--input_dir', type=str, default='./',
+         help='path of the directory that contains the ABACUS output subdirectory'
+     )
+     parser.add_argument(
+         '-o', '--output_dir', type=str, default='./',
+         help='path for the output .h5 and .dat files'
+     )
+     parser.add_argument(
+         '-a', '--abacus_suffix', type=str, default='ABACUS',
+         help='suffix of the ABACUS output subdirectory'
+     )
+     parser.add_argument(
+         '-S', '--only_S', type=int, default=0
+     )
+     parser.add_argument(
+         '-g', '--get_r', type=int, default=0
+     )
+     args = parser.parse_args()
+
+     input_path = args.input_dir
+     output_path = args.output_dir
+     data_name = "OUT." + args.abacus_suffix
+     only_S = bool(args.only_S)
+     get_r = bool(args.get_r)
+     print("only_S: {}".format(only_S))
+     print("get_r: {}".format(get_r))
+     abacus_parse(input_path, output_path, data_name, only_S, get_r)
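For reference, every block written to the `.h5` files above is addressed by a JSON-style string key `[Rx, Ry, Rz, atom_i, atom_j, ...]` with 1-based atom indices, which the downstream code decodes with `json.loads`. A minimal sketch of that round trip, with hypothetical values (`make_key` is an illustrative helper, not part of the repository):

```python
import json

def make_key(R, atom_i, atom_j):
    # Key convention used for the HDF5 datasets: "[Rx, Ry, Rz, i, j]", atoms 1-based.
    return str([int(R[0]), int(R[1]), int(R[2]), atom_i + 1, atom_j + 1])

key_str = make_key((0, -1, 2), atom_i=0, atom_j=3)
key = json.loads(key_str)                      # decode the key the same way get_rc.py does
R, atom_i, atom_j = key[:3], key[3] - 1, key[4] - 1
```

Python's `str(list)` rendering happens to be valid JSON for integer lists, which is why the writer and reader sides can stay this simple.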
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/aims_get_data.jl ADDED
@@ -0,0 +1,477 @@
+ using JSON
+ using HDF5
+ using LinearAlgebra
+ using DelimitedFiles
+ using StaticArrays
+ using ArgParse
+
+ function parse_commandline()
+     s = ArgParseSettings()
+     @add_arg_table! s begin
+         "--input_dir", "-i"
+             help = "NoTB.dat, basis-indices.out, geometry.in"
+             arg_type = String
+             default = "./"
+         "--output_dir", "-o"
+             help = ""
+             arg_type = String
+             default = "./output"
+         "--save_overlap", "-s"
+             help = ""
+             arg_type = Bool
+             default = false
+         "--save_position", "-p"
+             help = ""
+             arg_type = Bool
+             default = false
+     end
+     return parse_args(s)
+ end
+ parsed_args = parse_commandline()
+
+ input_dir = abspath(parsed_args["input_dir"])
+ output_dir = abspath(parsed_args["output_dir"])
+
+ @assert isfile(joinpath(input_dir, "NoTB.dat"))
+ @assert isfile(joinpath(input_dir, "basis-indices.out"))
+ @assert isfile(joinpath(input_dir, "geometry.in"))
+
+ # @info string("get data from: ", input_dir)
+ periodic_table = JSON.parsefile(joinpath(@__DIR__, "periodic_table.json"))
+ mkpath(output_dir)
+
+ # The parsing code below comes from "https://github.com/HopTB/HopTB.jl"
+ f = open(joinpath(input_dir, "NoTB.dat"))
+ # number of basis
+ @assert occursin("n_basis", readline(f)) # start
+ norbits = parse(Int64, readline(f))
+ @assert occursin("end", readline(f)) # end
+ @assert occursin("n_ham", readline(f)) # start
+ nhams = parse(Int64, readline(f))
+ @assert occursin("end", readline(f)) # end
+ @assert occursin("n_cell", readline(f)) # start
+ ncells = parse(Int64, readline(f))
+ @assert occursin("end", readline(f)) # end
+ # lattice vector
+ @assert occursin("lattice_vector", readline(f)) # start
+ lat = Matrix{Float64}(I, 3, 3)
+ for i in 1:3
+     lat[:, i] = map(x->parse(Float64, x), split(readline(f)))
+ end
+ @assert occursin("end", readline(f)) # end
+ # hamiltonian
+ @assert occursin("hamiltonian", readline(f)) # start
+ hamiltonian = zeros(nhams)
+ i = 1
+ while true
+     global i
+     @assert !eof(f)
+     ln = split(readline(f))
+     if occursin("end", ln[1]) break end
+     hamiltonian[i:i + length(ln) - 1] = map(x->parse(Float64, x), ln)
+     i += length(ln)
+ end
+ # overlaps
+ @assert occursin("overlap", readline(f)) # start
+ overlaps = zeros(nhams)
+ i = 1
+ while true
+     global i
+     @assert !eof(f)
+     ln = split(readline(f))
+     if occursin("end", ln[1]) break end
+     overlaps[i:i + length(ln) - 1] = map(x->parse(Float64, x), ln)
+     i += length(ln)
+ end
+ # index hamiltonian
+ @assert occursin("index_hamiltonian", readline(f)) # start
+ indexhamiltonian = zeros(Int64, ncells * norbits, 4)
+ i = 1
+ while true
+     global i
+     @assert !eof(f)
+     ln = split(readline(f))
+     if occursin("end", ln[1]) break end
+     indexhamiltonian[i, :] = map(x->parse(Int64, x), ln)
+     i += 1
+ end
+ # cell index
+ @assert occursin("cell_index", readline(f)) # start
+ cellindex = zeros(Int64, ncells, 3)
+ i = 1
+ while true
+     global i
+     @assert !eof(f)
+     ln = split(readline(f))
+     if occursin("end", ln[1]) break end
+     if i <= ncells
+         cellindex[i, :] = map(x->parse(Int64, x), ln)
+     end
+     i += 1
+ end
+ # column index hamiltonian
+ @assert occursin("column_index_hamiltonian", readline(f)) # start
+ columnindexhamiltonian = zeros(Int64, nhams)
+ i = 1
+ while true
+     global i
+     @assert !eof(f)
+     ln = split(readline(f))
+     if occursin("end", ln[1]) break end
+     columnindexhamiltonian[i:i + length(ln) - 1] = map(x->parse(Int64, x), ln)
+     i += length(ln)
+ end
+ # positions
+ positions = zeros(nhams, 3)
+ for dir in 1:3
+     positionsdir = zeros(nhams)
+     @assert occursin("position", readline(f)) # start
+     readline(f) # skip direction
+     i = 1
+     while true
+         @assert !eof(f)
+         ln = split(readline(f))
+         if occursin("end", ln[1]) break end
+         positionsdir[i:i + length(ln) - 1] = map(x->parse(Float64, x), ln)
+         i += length(ln)
+     end
+     positions[:, dir] = positionsdir
+ end
+ if !eof(f)
+     spinful = true
+     soc_matrix = zeros(nhams, 3)
+     for dir in 1:3
+         socdir = zeros(nhams)
+         @assert occursin("soc_matrix", readline(f)) # start
+         readline(f) # skip direction
+         i = 1
+         while true
+             @assert !eof(f)
+             ln = split(readline(f))
+             if occursin("end", ln[1]) break end
+             socdir[i:i + length(ln) - 1] = map(x->parse(Float64, x), ln)
+             i += length(ln)
+         end
+         soc_matrix[:, dir] = socdir
+     end
+ else
+     spinful = false
+ end
+ close(f)
+
+ orbital_types = Array{Array{Int64,1},1}(undef, 0)
+ basis_dir = joinpath(input_dir, "basis-indices.out")
+ @assert ispath(basis_dir)
+ f = open(basis_dir)
+ readline(f)
+ @assert split(readline(f))[1] == "fn."
+ basis_indices = zeros(Int64, norbits, 4)
+ for index_orbit in 1:norbits
+     line = map(x->parse(Int64, x), split(readline(f))[[1, 3, 4, 5, 6]])
+     @assert line[1] == index_orbit
+     basis_indices[index_orbit, :] = line[2:5]
+     # basis_indices: 1 ia, 2 n, 3 l, 4 m
+     if size(orbital_types, 1) < line[2]
+         orbital_type = Array{Int64,1}(undef, 0)
+         push!(orbital_types, orbital_type)
+     end
+     if line[4] == line[5]
+         push!(orbital_types[line[2]], line[4])
+     end
+ end
+ nsites = size(orbital_types, 1)
+ site_norbits = (x->sum(x .* 2 .+ 1)).(orbital_types) * (1 + spinful)
+ @assert norbits == sum(site_norbits)
+ site_norbits_cumsum = cumsum(site_norbits)
+ site_indices = zeros(Int64, norbits)
+ for index_site in 1:nsites
+     if index_site == 1
+         site_indices[1:site_norbits_cumsum[index_site]] .= index_site
+     else
+         site_indices[site_norbits_cumsum[index_site - 1] + 1:site_norbits_cumsum[index_site]] .= index_site
+     end
+ end
+ close(f)
+
+ f = open(joinpath(input_dir, "geometry.in"))
+ # atom_frac_pos = zeros(Float64, 3, nsites)
+ element = Array{Int64,1}(undef, 0)
+ index_atom = 0
+ while !eof(f)
+     line = split(readline(f))
+     if size(line, 1) > 0 && line[1] == "atom_frac"
+         global index_atom
+         index_atom += 1
+         # atom_frac_pos[:, index_atom] = map(x->parse(Float64, x), line[[2, 3, 4]])
+         push!(element, periodic_table[line[5]]["Atomic no"])
+     end
+ end
+ @assert index_atom == nsites
+ # site_positions = lat * atom_frac_pos
+ close(f)
+
+ @info string("spinful: ", spinful)
+ # write to file
+ site_positions = fill(NaN, (3, nsites))
+ overlaps_dict = Dict{Array{Int64, 1}, Array{Float64, 2}}()
+ positions_dict = Dict{Array{Int64, 1}, Array{Float64, 2}}()
+ R_list = Set{Vector{Int64}}()
+ if spinful
+     hamiltonians_dict = Dict{Array{Int64, 1}, Array{Complex{Float64}, 2}}()
+     @error "spinful not implemented yet"
+     σx = [0 1; 1 0]
+     σy = [0 -im; im 0]
+     σz = [1 0; 0 -1]
+     σ0 = [1 0; 0 1]
+     nm = TBModel{ComplexF64}(2*norbits, lat, isorthogonal=false)
+     # convention here is first half up (spin=0); second half down (spin=1).
+     for i in 1:size(indexhamiltonian, 1)
+         for j in indexhamiltonian[i, 3]:indexhamiltonian[i, 4]
+             for nspin in 0:1
+                 for mspin in 0:1
+                     sethopping!(nm,
+                         cellindex[indexhamiltonian[i, 1], :],
+                         columnindexhamiltonian[j] + norbits * nspin,
+                         indexhamiltonian[i, 2] + norbits * mspin,
+                         σ0[nspin + 1, mspin + 1] * hamiltonian[j] -
+                         (σx[nspin + 1, mspin + 1] * soc_matrix[j, 1] +
+                          σy[nspin + 1, mspin + 1] * soc_matrix[j, 2] +
+                          σz[nspin + 1, mspin + 1] * soc_matrix[j, 3]) * im)
+                     setoverlap!(nm,
+                         cellindex[indexhamiltonian[i, 1], :],
+                         columnindexhamiltonian[j] + norbits * nspin,
+                         indexhamiltonian[i, 2] + norbits * mspin,
+                         σ0[nspin + 1, mspin + 1] * overlaps[j])
+                 end
+             end
+         end
+     end
+     for i in 1:size(indexhamiltonian, 1)
+         for j in indexhamiltonian[i, 3]:indexhamiltonian[i, 4]
+             for nspin in 0:1
+                 for mspin in 0:1
+                     for dir in 1:3
+                         setposition!(nm,
+                             cellindex[indexhamiltonian[i, 1], :],
+                             columnindexhamiltonian[j] + norbits * nspin,
+                             indexhamiltonian[i, 2] + norbits * mspin,
+                             dir,
+                             σ0[nspin + 1, mspin + 1] * positions[j, dir])
+                     end
+                 end
+             end
+         end
+     end
+     # return nm
+ else
+     hamiltonians_dict = Dict{Array{Int64, 1}, Array{Float64, 2}}()
+
+     for i in 1:size(indexhamiltonian, 1)
+         for j in indexhamiltonian[i, 3]:indexhamiltonian[i, 4]
+             R = cellindex[indexhamiltonian[i, 1], :]
+             push!(R_list, SVector{3, Int64}(R))
+             orbital_i_whole = columnindexhamiltonian[j]
+             orbital_j_whole = indexhamiltonian[i, 2]
+             site_i = site_indices[orbital_i_whole]
+             site_j = site_indices[orbital_j_whole]
+             block_matrix_i = orbital_i_whole - site_norbits_cumsum[site_i] + site_norbits[site_i]
+             block_matrix_j = orbital_j_whole - site_norbits_cumsum[site_j] + site_norbits[site_j]
+             key = cat(dims=1, R, site_i, site_j)
+             key_inv = cat(dims=1, -R, site_j, site_i)
+
+             mi = 0
+             mj = 0
+             # p-orbital
+             if basis_indices[orbital_i_whole, 3] == 1
+                 if basis_indices[orbital_i_whole, 4] == -1
+                     block_matrix_i += 1
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 0
+                     block_matrix_i += 1
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 1
+                     block_matrix_i += -2
+                     mi = 1
+                 end
+             end
+             if basis_indices[orbital_j_whole, 3] == 1
+                 if basis_indices[orbital_j_whole, 4] == -1
+                     block_matrix_j += 1
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 0
+                     block_matrix_j += 1
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 1
+                     block_matrix_j += -2
+                     mj = 1
+                 end
+             end
+             # d-orbital
+             if basis_indices[orbital_i_whole, 3] == 2
+                 if basis_indices[orbital_i_whole, 4] == -2
+                     block_matrix_i += 2
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == -1
+                     block_matrix_i += 3
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 0
+                     block_matrix_i += -2
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 1
+                     block_matrix_i += 0
+                     mi = 1
+                 elseif basis_indices[orbital_i_whole, 4] == 2
+                     block_matrix_i += -3
+                     mi = 0
+                 end
+             end
+             if basis_indices[orbital_j_whole, 3] == 2
+                 if basis_indices[orbital_j_whole, 4] == -2
+                     block_matrix_j += 2
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == -1
+                     block_matrix_j += 3
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 0
+                     block_matrix_j += -2
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 1
+                     block_matrix_j += 0
+                     mj = 1
+                 elseif basis_indices[orbital_j_whole, 4] == 2
+                     block_matrix_j += -3
+                     mj = 0
+                 end
+             end
+             # f-orbital
+             if basis_indices[orbital_i_whole, 3] == 3
+                 if basis_indices[orbital_i_whole, 4] == -3
+                     block_matrix_i += 6
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == -2
+                     block_matrix_i += 3
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == -1
+                     block_matrix_i += 0
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 0
+                     block_matrix_i += -3
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 1
+                     block_matrix_i += -3
+                     mi = 1
+                 elseif basis_indices[orbital_i_whole, 4] == 2
+                     block_matrix_i += -2
+                     mi = 0
+                 elseif basis_indices[orbital_i_whole, 4] == 3
+                     block_matrix_i += -1
+                     mi = 1
+                 end
+             end
+             if basis_indices[orbital_j_whole, 3] == 3
+                 if basis_indices[orbital_j_whole, 4] == -3
+                     block_matrix_j += 6
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == -2
+                     block_matrix_j += 3
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == -1
+                     block_matrix_j += 0
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 0
+                     block_matrix_j += -3
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 1
+                     block_matrix_j += -3
+                     mj = 1
+                 elseif basis_indices[orbital_j_whole, 4] == 2
+                     block_matrix_j += -2
+                     mj = 0
+                 elseif basis_indices[orbital_j_whole, 4] == 3
+                     block_matrix_j += -1
+                     mj = 1
+                 end
+             end
+             if (basis_indices[orbital_i_whole, 3] > 3) || (basis_indices[orbital_j_whole, 3] > 3)
+                 @error("The case of l>3 is not implemented")
+             end
+
+             if !(key ∈ keys(hamiltonians_dict))
+                 # overlaps_dict[key] = fill(convert(Float64, NaN), (site_norbits[site_i], site_norbits[site_j]))
+                 overlaps_dict[key] = zeros(Float64, site_norbits[site_i], site_norbits[site_j])
+                 hamiltonians_dict[key] = zeros(Float64, site_norbits[site_i], site_norbits[site_j])
+                 for direction in 1:3
+                     positions_dict[cat(dims=1, key, direction)] = zeros(Float64, site_norbits[site_i], site_norbits[site_j])
+                 end
+             end
+             if !(key_inv ∈ keys(hamiltonians_dict))
+                 overlaps_dict[key_inv] = zeros(Float64, site_norbits[site_j], site_norbits[site_i])
+                 hamiltonians_dict[key_inv] = zeros(Float64, site_norbits[site_j], site_norbits[site_i])
+                 for direction in 1:3
+                     positions_dict[cat(dims=1, key_inv, direction)] = zeros(Float64, site_norbits[site_j], site_norbits[site_i])
+                 end
+             end
+             overlaps_dict[key][block_matrix_i, block_matrix_j] = overlaps[j] * (-1) ^ (mi + mj)
+             hamiltonians_dict[key][block_matrix_i, block_matrix_j] = hamiltonian[j] * (-1) ^ (mi + mj)
+             for direction in 1:3
+                 positions_dict[cat(dims=1, key, direction)][block_matrix_i, block_matrix_j] = positions[j, direction] * (-1) ^ (mi + mj)
+             end
+
+             overlaps_dict[key_inv][block_matrix_j, block_matrix_i] = overlaps[j] * (-1) ^ (mi + mj)
+             hamiltonians_dict[key_inv][block_matrix_j, block_matrix_i] = hamiltonian[j] * (-1) ^ (mi + mj)
+             for direction in 1:3
+                 positions_dict[cat(dims=1, key_inv, direction)][block_matrix_j, block_matrix_i] = positions[j, direction] * (-1) ^ (mi + mj)
+                 if (R == [0, 0, 0]) && (block_matrix_i == block_matrix_j) && isnan(site_positions[direction, site_i])
+                     site_positions[direction, site_i] = positions[j, direction]
+                 end
+             end
+         end
+     end
+ end
+
+ if parsed_args["save_overlap"]
+     h5open(joinpath(output_dir, "overlaps.h5"), "w") do fid
+         for (key, overlap) in overlaps_dict
+             write(fid, string(key), permutedims(overlap))
+         end
+     end
+ end
+ h5open(joinpath(output_dir, "hamiltonians.h5"), "w") do fid
+     for (key, hamiltonian) in hamiltonians_dict
+         write(fid, string(key), permutedims(hamiltonian)) # npz seems to do a transpose specially for Julia, while h5 does not
+     end
+ end
+ if parsed_args["save_position"]
+     h5open(joinpath(output_dir, "positions.h5"), "w") do fid
+         for (key, position) in positions_dict
+             write(fid, string(key), permutedims(position)) # npz seems to do a transpose specially for Julia, while h5 does not
+         end
+     end
+ end
+
+ open(joinpath(output_dir, "orbital_types.dat"), "w") do f
+     writedlm(f, orbital_types)
+ end
+ open(joinpath(output_dir, "lat.dat"), "w") do f
+     writedlm(f, lat)
+ end
+ rlat = 2pi * inv(lat)'
+ open(joinpath(output_dir, "rlat.dat"), "w") do f
+     writedlm(f, rlat)
+ end
+ open(joinpath(output_dir, "site_positions.dat"), "w") do f
+     writedlm(f, site_positions)
+ end
+ R_list = collect(R_list)
+ open(joinpath(output_dir, "R_list.dat"), "w") do f
+     writedlm(f, R_list)
+ end
+ info_dict = Dict(
+     "isspinful" => spinful
+ )
+ open(joinpath(output_dir, "info.json"), "w") do f
+     write(f, json(info_dict, 4))
+ end
+ open(joinpath(output_dir, "element.dat"), "w") do f
+     writedlm(f, element)
+ end
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/get_rc.py ADDED
@@ -0,0 +1,165 @@
+ import os
+ import json
+
+ import h5py
+ import numpy as np
+ import torch
+
+
+ class Neighbours:
+     def __init__(self):
+         self.Rs = []
+         self.dists = []
+         self.eijs = []
+         self.indices = []
+
+     def __str__(self):
+         return 'Rs: {}\ndists: {}\neijs: {}\nindices: {}'.format(
+             self.Rs, self.dists, self.eijs, self.indices)
+
+
+ def _get_local_coordinate(eij, neighbours_i, gen_rc_idx=False, atom_j=None, atom_j_R=None, r2_rand=False):
+     if gen_rc_idx:
+         rc_idx = np.full(8, np.nan, dtype=np.int32)
+         assert r2_rand is False
+         assert atom_j is not None, 'atom_j must be specified when gen_rc_idx is True'
+         assert atom_j_R is not None, 'atom_j_R must be specified when gen_rc_idx is True'
+     else:
+         rc_idx = None
+     if r2_rand:
+         r2_list = []
+
+     if not np.allclose(eij.detach(), torch.zeros_like(eij)):
+         r1 = eij
+         if gen_rc_idx:
+             rc_idx[0] = atom_j
+             rc_idx[1:4] = atom_j_R
+     else:
+         r1 = neighbours_i.eijs[1]
+         if gen_rc_idx:
+             rc_idx[0] = neighbours_i.indices[1]
+             rc_idx[1:4] = neighbours_i.Rs[1]
+     r2_flag = None
+     for r2, r2_index, r2_R in zip(neighbours_i.eijs[1:], neighbours_i.indices[1:], neighbours_i.Rs[1:]):
+         if torch.norm(torch.cross(r1, r2)) > 1e-6:
+             if gen_rc_idx:
+                 rc_idx[4] = r2_index
+                 rc_idx[5:8] = r2_R
+             r2_flag = True
+             if r2_rand:
+                 if (len(r2_list) == 0) or (torch.norm(r2_list[0]) + 0.5 > torch.norm(r2)):
+                     r2_list.append(r2)
+                 else:
+                     break
+             else:
+                 break
+     assert r2_flag is not None, "There is no linearly independent chemical bond within the Rcut range; this may be caused by a too small Rcut, or the structure being 1D"
+     if r2_rand:
+         # print(f"r2 is randomly chosen from {len(r2_list)} candidates")
+         r2 = r2_list[np.random.randint(len(r2_list))]
+     local_coordinate_1 = r1 / torch.norm(r1)
+     local_coordinate_2 = torch.cross(r1, r2) / torch.norm(torch.cross(r1, r2))
+     local_coordinate_3 = torch.cross(local_coordinate_1, local_coordinate_2)
+     return torch.stack([local_coordinate_1, local_coordinate_2, local_coordinate_3], dim=-1), rc_idx
+
+
+ def get_rc(input_dir, output_dir, radius, r2_rand=False, gen_rc_idx=False, gen_rc_by_idx="", create_from_DFT=True, neighbour_file='overlaps.h5', if_require_grad=False, cart_coords=None):
+     if not if_require_grad:
+         assert os.path.exists(os.path.join(input_dir, 'site_positions.dat')), 'No site_positions.dat found in {}'.format(input_dir)
+         cart_coords = torch.tensor(np.loadtxt(os.path.join(input_dir, 'site_positions.dat')).T)
+     else:
+         assert cart_coords is not None, 'cart_coords must be provided if "if_require_grad" is True'
+     assert os.path.exists(os.path.join(input_dir, 'lat.dat')), 'No lat.dat found in {}'.format(input_dir)
+     lattice = torch.tensor(np.loadtxt(os.path.join(input_dir, 'lat.dat')).T, dtype=cart_coords.dtype)
+
+     rc_dict = {}
+     if gen_rc_idx:
+         assert r2_rand is False, 'r2_rand must be False when gen_rc_idx is True'
+         assert gen_rc_by_idx == "", 'gen_rc_by_idx must be "" when gen_rc_idx is True'
+         rc_idx_dict = {}
+     neighbours_dict = {}
+     if gen_rc_by_idx != "":
+         # print(f'get local coordinate using {os.path.join(gen_rc_by_idx, "rc_idx.h5")} from: {input_dir}')
+         assert os.path.exists(os.path.join(gen_rc_by_idx, "rc_idx.h5")), 'Atomic indices rc_idx.h5 for constructing rc not found in {}'.format(gen_rc_by_idx)
+         fid_rc_idx = h5py.File(os.path.join(gen_rc_by_idx, "rc_idx.h5"), 'r')
+         for key_str, rc_idx in fid_rc_idx.items():
+             key = json.loads(key_str)
+             # R = torch.tensor([key[0], key[1], key[2]])
+             atom_i = key[3] - 1
+             cart_coords_i = cart_coords[atom_i]
+
+             r1 = cart_coords[rc_idx[0]] + torch.tensor(rc_idx[1:4]).type(cart_coords.dtype) @ lattice - cart_coords_i
+             r2 = cart_coords[rc_idx[4]] + torch.tensor(rc_idx[5:8]).type(cart_coords.dtype) @ lattice - cart_coords_i
+             local_coordinate_1 = r1 / torch.norm(r1)
+             local_coordinate_2 = torch.cross(r1, r2) / torch.norm(torch.cross(r1, r2))
+             local_coordinate_3 = torch.cross(local_coordinate_1, local_coordinate_2)
+
+             rc_dict[key_str] = torch.stack([local_coordinate_1, local_coordinate_2, local_coordinate_3], dim=-1)
+         fid_rc_idx.close()
+     else:
+         # print("get local coordinate from:", input_dir)
+         if create_from_DFT:
+             assert os.path.exists(os.path.join(input_dir, neighbour_file)), 'No {} found in {}'.format(neighbour_file, input_dir)
+             fid_OLP = h5py.File(os.path.join(input_dir, neighbour_file), 'r')
+             for key_str in fid_OLP.keys():
+                 key = json.loads(key_str)
+                 R = torch.tensor([key[0], key[1], key[2]])
+                 atom_i = key[3] - 1
+                 atom_j = key[4] - 1
+                 cart_coords_i = cart_coords[atom_i]
+                 cart_coords_j = cart_coords[atom_j] + R.type(cart_coords.dtype) @ lattice
+                 eij = cart_coords_j - cart_coords_i
+                 dist = torch.norm(eij)
+                 if radius > 0 and dist > radius:
+                     continue
+                 if atom_i not in neighbours_dict:
+                     neighbours_dict[atom_i] = Neighbours()
+                 neighbours_dict[atom_i].Rs.append(R)
+                 neighbours_dict[atom_i].dists.append(dist)
+                 neighbours_dict[atom_i].eijs.append(eij)
+                 neighbours_dict[atom_i].indices.append(atom_j)
+
+             for atom_i, neighbours_i in neighbours_dict.items():
+                 neighbours_i.Rs = torch.stack(neighbours_i.Rs)
+                 neighbours_i.dists = torch.tensor(neighbours_i.dists, dtype=cart_coords.dtype)
+                 neighbours_i.eijs = torch.stack(neighbours_i.eijs)
+                 neighbours_i.indices = torch.tensor(neighbours_i.indices)
+
+                 neighbours_i.dists, sorted_index = torch.sort(neighbours_i.dists)
+                 neighbours_i.Rs = neighbours_i.Rs[sorted_index]
+                 neighbours_i.eijs = neighbours_i.eijs[sorted_index]
+                 neighbours_i.indices = neighbours_i.indices[sorted_index]
+
+                 assert np.allclose(neighbours_i.eijs[0].detach(), torch.zeros_like(neighbours_i.eijs[0])), 'eijs[0] should be zero'
+
+                 for R, eij, atom_j, atom_j_R in zip(neighbours_i.Rs, neighbours_i.eijs, neighbours_i.indices, neighbours_i.Rs):
+                     key_str = str(list([*R.tolist(), atom_i + 1, atom_j.item() + 1]))
+                     if gen_rc_idx:
+                         rc_dict[key_str], rc_idx_dict[key_str] = _get_local_coordinate(eij, neighbours_i, gen_rc_idx, atom_j, atom_j_R)
+                     else:
+                         rc_dict[key_str] = _get_local_coordinate(eij, neighbours_i, r2_rand=r2_rand)[0]
+         else:
+             raise NotImplementedError
+
+         if create_from_DFT:
+             fid_OLP.close()
+
+     if if_require_grad:
+         return rc_dict
+     else:
+         if os.path.exists(os.path.join(output_dir, 'rc_julia.h5')):
+             rc_old_flag = True
+             fid_rc_old = h5py.File(os.path.join(output_dir, 'rc_julia.h5'), 'r')
+         else:
+             rc_old_flag = False
+         fid_rc = h5py.File(os.path.join(output_dir, 'rc.h5'), 'w')
+         for k, v in rc_dict.items():
+             if rc_old_flag:
+                 assert np.allclose(v, fid_rc_old[k][...], atol=1e-4), f"{k}, {v}, {fid_rc_old[k][...]}"
+             fid_rc[k] = v
+         fid_rc.close()
+         if gen_rc_idx:
+             fid_rc_idx = h5py.File(os.path.join(output_dir, 'rc_idx.h5'), 'w')
+             for k, v in rc_idx_dict.items():
+                 fid_rc_idx[k] = v
+             fid_rc_idx.close()
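The frame construction used in `_get_local_coordinate` (normalize the first bond, take the normalized cross product for the second axis, and cross those for the third) always yields a right-handed orthonormal basis. A standalone numpy sketch of the same construction (`local_frame` is an illustrative helper, not part of the repository):

```python
import numpy as np

def local_frame(r1, r2):
    # Right-handed orthonormal frame from two non-parallel bond vectors,
    # mirroring _get_local_coordinate above (numpy instead of torch).
    e1 = r1 / np.linalg.norm(r1)
    n = np.cross(r1, r2)
    e2 = n / np.linalg.norm(n)          # requires r1, r2 linearly independent
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3], axis=-1)  # columns are the local axes

frame = local_frame(np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]))
```

The columns satisfy `frame.T @ frame == I` with determinant +1; this is why the function asserts that some neighbour bond is linearly independent of `r1` before building the frame.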
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/openmx_get_data.jl ADDED
@@ -0,0 +1,471 @@
1
+ using StaticArrays
2
+ using LinearAlgebra
3
+ using HDF5
4
+ using JSON
5
+ using DelimitedFiles
6
+ using Statistics
7
+ using ArgParse
8
+
9
+ function parse_commandline()
10
+ s = ArgParseSettings()
11
+ @add_arg_table! s begin
12
+ "--input_dir", "-i"
13
+ help = ""
14
+ arg_type = String
15
+ default = "./"
16
+ "--output_dir", "-o"
17
+ help = ""
18
+ arg_type = String
19
+ default = "./output"
20
+ "--if_DM", "-d"
21
+ help = ""
22
+ arg_type = Bool
23
+ default = false
24
+ "--save_overlap", "-s"
25
+ help = ""
26
+ arg_type = Bool
27
+ default = false
28
+ end
29
+ return parse_args(s)
30
+ end
31
+ parsed_args = parse_commandline()
32
+
33
+ # @info string("get data from: ", parsed_args["input_dir"])
34
+ periodic_table = JSON.parsefile(joinpath(@__DIR__, "periodic_table.json"))
35
+
36
+ #=
37
+ struct Site_list
38
+ R::Array{StaticArrays.SArray{Tuple{3},Int16,1,3},1}
39
+ site::Array{Int64,1}
40
+ pos::Array{Float64,2}
41
+ end
42
+
43
+ function _cal_neighbour_site(e_ij::Array{Float64,2},Rcut::Float64)
44
+ r_ij = sum(dims=1,e_ij.^2)[1,:]
45
+ p = sortperm(r_ij)
46
+ j_cut = searchsorted(r_ij[p],Rcut^2).stop
47
+ return p[1:j_cut]
48
+ end
49
+
50
+ function cal_neighbour_site(site_positions::Matrix{<:Real}, lat::Matrix{<:Real}, R_list::Union{Vector{SVector{3, Int64}}, Vector{Vector{Int64}}}, nsites::Int64, Rcut::Float64)
51
+ # writed by lihe
52
+ neighbour_site = Array{Site_list,1}([])
53
+ # R_list = collect(keys(tm.hoppings))
54
+ pos_R_list = map(R -> collect(lat * R), R_list)
55
+ pos_j_list = cat(dims=2, map(pos_R -> pos_R .+ site_positions, pos_R_list)...)
56
+ for site_i in 1:nsites
57
+ pos_i = site_positions[:, site_i]
58
+ e_ij = pos_j_list .- pos_i
59
+ p = _cal_neighbour_site(e_ij, Rcut)
60
+ R_ordered = R_list[map(x -> div(x + (nsites - 1), nsites),p)]
61
+ site_ordered = map(x -> mod(x - 1, nsites) + 1,p)
62
+ pos_ordered = e_ij[:,p]
63
+ @assert pos_ordered[:,1] ≈ [0,0,0]
64
+ push!(neighbour_site, Site_list(R_ordered, site_ordered, pos_ordered))
65
+ end
66
+ return neighbour_site
67
+ end
68
+
69
+ function _get_local_coordinate(e_ij::Array{Float64,1},site_list_i::Site_list)
70
+ if e_ij != [0,0,0]
71
+ r1 = e_ij
72
+ else
73
+ r1 = site_list_i.pos[:,2]
74
+ end
75
+ nsites_i = length(site_list_i.R)
76
+ r2 = [0,0,0]
77
+ for j in 1:nsites_i
78
+ r2 = site_list_i.pos[:,j]
79
+ if norm(cross(r1,r2)) != 0
80
+ break
81
+ end
82
+ if j == nsites_i
83
+ for k in 1:3
84
+ r2 = [[1,0,0],[0,1,0],[0,0,1]][k]
85
+ if norm(cross(r1,r2)) != 0
86
+ break
87
+ end
88
+ end
89
+ end
90
+ end
91
+ if r2 == [0,0,0]
92
+ error("there is no linear independent chemical bond in the Rcut range, this may be caused by a too small Rcut or the structure is1D")
93
+ end
94
+ local_coordinate = zeros(Float64,(3,3))
95
+ local_coordinate[:,1] = r1/norm(r1)
96
+
97
+ local_coordinate[:,2] = cross(r1,r2)/norm(cross(r1,r2))
98
+ local_coordinate[:,3] = cross(local_coordinate[:,1],local_coordinate[:,2])
99
+ return local_coordinate
100
+ end
101
+
102
+ function get_local_coordinates(site_positions::Matrix{<:Real}, lat::Matrix{<:Real}, R_list::Vector{SVector{3, Int64}}, Rcut::Float64)::Dict{Array{Int64,1},Array{Float64,2}}
103
+ nsites = size(site_positions, 2)
104
+ neighbour_site = cal_neighbour_site(site_positions, lat, R_list, nsites, Rcut)
105
+ local_coordinates = Dict{Array{Int64,1},Array{Float64,2}}()
106
+ for site_i = 1:nsites
107
+ site_list_i = neighbour_site[site_i]
108
+ nsites_i = length(site_list_i.R)
109
+ for j = 1:nsites_i
110
+ R = site_list_i.R[j]; site_j = site_list_i.site[j]; e_ij = site_list_i.pos[:,j]
111
+ local_coordinate = _get_local_coordinate(e_ij, site_list_i)
112
+ local_coordinates[cat(dims=1, R, site_i, site_j)] = local_coordinate
113
+ end
114
+ end
115
+ return local_coordinates
116
+ end
117
+ =#
118
+
119
+ # The function parse_openmx below is come from "https://github.com/HopTB/HopTB.jl"
120
+ function parse_openmx(filepath::String; return_DM::Bool = false)
121
+ # define some helper functions for mixed structure of OpenMX binary data file.
122
+ function multiread(::Type{T}, f, size)::Vector{T} where T
123
+ ret = Vector{T}(undef, size)
124
+ read!(f, ret);ret
125
+ end
126
+ multiread(f, size) = multiread(Int32, f, size)
127
+
128
+ function read_mixed_matrix(::Type{T}, f, dims::Vector{<:Integer}) where T
129
+ ret::Vector{Vector{T}} = []
130
+ for i = dims; t = Vector{T}(undef, i);read!(f, t);push!(ret, t); end; ret
131
+ end
132
+
133
+ function read_matrix_in_mixed_matrix(::Type{T}, f, spins, atomnum, FNAN, natn, Total_NumOrbs) where T
134
+ ret = Vector{Vector{Vector{Matrix{T}}}}(undef, spins)
135
+ for spin = 1:spins;t_spin = Vector{Vector{Matrix{T}}}(undef, atomnum)
136
+ for ai = 1:atomnum;t_ai = Vector{Matrix{T}}(undef, FNAN[ai])
137
+ for aj_inner = 1:FNAN[ai]
138
+ t = Matrix{T}(undef, Total_NumOrbs[natn[ai][aj_inner]], Total_NumOrbs[ai])
139
+ read!(f, t);t_ai[aj_inner] = t
140
+ end;t_spin[ai] = t_ai
141
+ end;ret[spin] = t_spin
142
+ end;return ret
143
+ end
144
+ read_matrix_in_mixed_matrix(f, spins, atomnum, FNAN, natn, Total_NumOrbs) = read_matrix_in_mixed_matrix(Float64, f, spins, atomnum, FNAN, natn, Total_NumOrbs)
145
+
146
+ read_3d_vecs(::Type{T}, f, num) where T = reshape(multiread(T, f, 4 * num), 4, Int(num))[2:4,:]
147
+ read_3d_vecs(f, num) = read_3d_vecs(Float64, f, num)
148
+ # End of helper functions
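The binary helpers above pull fixed-size `Int32`/`Float64` records out of the `.scfout` stream. A NumPy sketch of the same pattern, assuming the file's native (typically little-endian) layout; the demo buffer stands in for a real scfout file:

```python
import io
import numpy as np

def multiread(f, count, dtype=np.int32):
    # Read `count` fixed-size records of `dtype` from a binary stream.
    raw = f.read(count * np.dtype(dtype).itemsize)
    return np.frombuffer(raw, dtype=dtype)

# Demo on an in-memory buffer standing in for the scfout file.
buf = io.BytesIO(np.arange(7, dtype=np.int32).tobytes())
header = multiread(buf, 7)
assert list(header) == [0, 1, 2, 3, 4, 5, 6]
```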
149
+
150
+ bound_multiread(T, size) = multiread(T, f, size)
151
+ bound_multiread(size) = multiread(f, size)
152
+ bound_read_mixed_matrix() = read_mixed_matrix(Int32, f, FNAN)
153
+ bound_read_matrix_in_mixed_matrix(spins) = read_matrix_in_mixed_matrix(f, spins, atomnum, FNAN, natn, Total_NumOrbs)
154
+ bound_read_3d_vecs(num) = read_3d_vecs(f, num)
155
+ bound_read_3d_vecs(::Type{T}, num) where T = read_3d_vecs(T, f, num)
156
+ # End of bound helper functions
157
+
158
+ f = open(filepath)
159
+ atomnum, SpinP_switch, Catomnum, Latomnum, Ratomnum, TCpyCell, order_max = bound_multiread(7)
160
+ @assert (SpinP_switch >> 2) == 3 "DeepH-pack only supports OpenMX v3.9. Please check your OpenMX version"
161
+ SpinP_switch &= 0x03
162
+
163
+ atv, atv_ijk = bound_read_3d_vecs.([Float64,Int32], TCpyCell + 1)
164
+
165
+ Total_NumOrbs, FNAN = bound_multiread.([atomnum,atomnum])
166
+ FNAN .+= 1
167
+ natn = bound_read_mixed_matrix()
168
+ ncn = ((x)->x .+ 1).(bound_read_mixed_matrix()) # This fixes that atv and atv_ijk indices start from 0 in the original C code.
169
+
170
+ tv, rtv, Gxyz = bound_read_3d_vecs.([3,3,atomnum])
171
+
172
+ Hk = bound_read_matrix_in_mixed_matrix(SpinP_switch + 1)
173
+ iHk = SpinP_switch == 3 ? bound_read_matrix_in_mixed_matrix(3) : nothing
174
+ OLP = bound_read_matrix_in_mixed_matrix(1)[1]
175
+ OLP_r = []
176
+ for dir in 1:3, order in 1:order_max
177
+ t = bound_read_matrix_in_mixed_matrix(1)[1]
178
+ if order == 1 push!(OLP_r, t) end
179
+ end
180
+ OLP_p = bound_read_matrix_in_mixed_matrix(3)
181
+ DM = bound_read_matrix_in_mixed_matrix(SpinP_switch + 1)
182
+ iDM = bound_read_matrix_in_mixed_matrix(2)
183
+ solver = bound_multiread(1)[1]
184
+ chem_p, E_temp = bound_multiread(Float64, 2)
185
+ dipole_moment_core, dipole_moment_background = bound_multiread.(Float64, [3,3])
186
+ Valence_Electrons, Total_SpinS = bound_multiread(Float64, 2)
187
+ dummy_blocks = bound_multiread(1)[1]
188
+ for i in 1:dummy_blocks
189
+ bound_multiread(UInt8, 256)
190
+ end
191
+
192
+ # We assume that the original output file (.out) was appended to the end of the .scfout file.
193
+ function strip1(s::Vector{UInt8})
194
+ startpos = 0
195
+ for i = 1:length(s) + 1
196
+ if i > length(s) || s[i] & 0x80 != 0 || !isspace(Char(s[i] & 0x7f))
197
+ startpos = i
198
+ break
199
+ end
200
+ end
201
+ return s[startpos:end]
202
+ end
203
+ function startswith1(s::Vector{UInt8}, prefix::Vector{UInt8})
204
+ return length(s) >= length(prefix) && s[1:length(prefix)] == prefix
205
+ end
206
+ function isnum(s::Char)
207
+ if s >= '1' && s <= '9'
208
+ return true
209
+ else
210
+ return false
211
+ end
212
+ end
213
+
214
+ function isorb(s::Char)
215
+ if s in ['s','p','d','f']
216
+ return true
217
+ else
218
+ return false
219
+ end
220
+ end
221
+
222
+ function orbital_types_str2num(str::AbstractString)
223
+ orbs = split(str, isnum, keepempty = false)
224
+ nums = map(x->parse(Int, x), split(str, isorb, keepempty = false))
225
+ orb2l = Dict("s" => 0, "p" => 1, "d" => 2, "f" => 3)
226
+ @assert length(orbs) == length(nums)
227
+ orbital_types = Array{Int64,1}(undef, 0)
228
+ for (orb, num) in zip(orbs, nums)
229
+ for i = 1:num
230
+ push!(orbital_types, orb2l[orb])
231
+ end
232
+ end
233
+ return orbital_types
234
+ end
235
+
236
+ function find_target_line(target_line::String)
237
+ target_line_UInt8 = Vector{UInt8}(target_line)
238
+ while !startswith1(strip1(Vector{UInt8}(readline(f))), target_line_UInt8)
239
+ if eof(f)
240
+ error(string(target_line, " not found. Please check whether the .out file was appended to the end of the .scfout file!"))
241
+ end
242
+ end
243
+ end
244
+
245
+ # @info """get orbital setting of element:orbital_types_element::Dict{String,Array{Int64,1}} "element" => orbital_types"""
246
+ find_target_line("<Definition.of.Atomic.Species")
247
+ orbital_types_element = Dict{String,Array{Int64,1}}([])
248
+ while true
249
+ str = readline(f)
250
+ if str == "Definition.of.Atomic.Species>"
251
+ break
252
+ end
253
+ element = split(str)[1]
254
+ orbital_types_str = split(split(str)[2], "-")[2]
255
+ orbital_types_element[element] = orbital_types_str2num(orbital_types_str)
256
+ end
257
+
258
+
259
+ # @info "get Chemical potential (Hartree)"
260
+ find_target_line("(see also PRB 72, 045121(2005) for the energy contributions)")
261
+ readline(f)
262
+ readline(f)
263
+ readline(f)
264
+ str = split(readline(f))
265
+ @assert "Chemical" == str[1]
266
+ @assert "potential" == str[2]
267
+ @assert "(Hartree)" == str[3]
268
+ ev2Hartree = 0.036749324533634074
269
+ fermi_level = parse(Float64, str[length(str)])/ev2Hartree
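The chemical potential is reported in Hartree and converted to eV by dividing by `ev2Hartree`; that is numerically the same as multiplying by the `Hartree2Ev = 27.2113845` constant used in the Python parser below. A quick check:

```python
EV2HARTREE = 0.036749324533634074
HARTREE2EV = 27.2113845

def hartree_to_ev(e_hartree):
    # Dividing by the eV->Hartree factor equals multiplying by Hartree->eV.
    return e_hartree / EV2HARTREE
```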
270
+
271
+ # @info "get Chemical potential (Hartree)"
272
+ # find_target_line("Eigenvalues (Hartree)")
273
+ # for i = 1:2;@assert readline(f) == "***********************************************************";end
274
+ # readline(f)
275
+ # str = split(readline(f))
276
+ # ev2Hartree = 0.036749324533634074
277
+ # fermi_level = parse(Float64, str[length(str)])/ev2Hartree
278
+
279
+
280
+ # @info "get Fractional coordinates & orbital types:"
281
+ find_target_line("Fractional coordinates of the final structure")
282
+ target_line = Vector{UInt8}("Fractional coordinates of the final structure")
283
+ for i = 1:2;@assert readline(f) == "***********************************************************";end
284
+ @assert readline(f) == ""
285
+ orbital_types = Array{Array{Int64,1},1}(undef, 0) #orbital_types
286
+ element = Array{Int64,1}(undef, 0) # atomic numbers
287
+ atom_frac_pos = zeros(3, atomnum) #Fractional coordinates
288
+ for i = 1:atomnum
289
+ str = readline(f)
290
+ element_str = split(str)[2]
291
+ push!(orbital_types, orbital_types_element[element_str])
292
+ m = match(r"^\s*\d+\s+\w+\s+([0-9+-.Ee]+)\s+([0-9+-.Ee]+)\s+([0-9+-.Ee]+)", str)
293
+ push!(element, periodic_table[element_str]["Atomic no"])
294
+ atom_frac_pos[:,i] = ((x)->parse(Float64, x)).(m.captures)
295
+ end
296
+ atom_pos = tv * atom_frac_pos
297
+ close(f)
298
+
299
+ # use the atom_pos to fix
300
+ # TODO: Persuade wangc to accept the following code, which seems hopeless and meaningless.
301
+ """
302
+ for axis = 1:3
303
+ ((x2, y2, z)->((x, y)->x .+= z * y).(x2, y2)).(OLP_r[axis], OLP, atom_pos[axis,:])
304
+ end
305
+ """
306
+ for axis in 1:3,i in 1:atomnum, j in 1:FNAN[i]
307
+ OLP_r[axis][i][j] .+= atom_pos[axis,i] * OLP[i][j]
308
+ end
309
+
310
+ # fix type mismatch
311
+ atv_ijk = Matrix{Int64}(atv_ijk)
312
+
313
+ if return_DM
314
+ return element, atomnum, SpinP_switch, atv, atv_ijk, Total_NumOrbs, FNAN, natn, ncn, tv, Hk, iHk, OLP, OLP_r, orbital_types, fermi_level, atom_pos, DM
315
+ else
316
+ return element, atomnum, SpinP_switch, atv, atv_ijk, Total_NumOrbs, FNAN, natn, ncn, tv, Hk, iHk, OLP, OLP_r, orbital_types, fermi_level, atom_pos, nothing
317
+ end
318
+ end
319
+
320
+ function get_data(filepath_scfout::String, Rcut::Float64; if_DM::Bool = false)
321
+ element, nsites, SpinP_switch, atv, atv_ijk, Total_NumOrbs, FNAN, natn, ncn, lat, Hk, iHk, OLP, OLP_r, orbital_types, fermi_level, site_positions, DM = parse_openmx(filepath_scfout; return_DM=if_DM)
322
+
323
+ for t in [Hk, iHk]
324
+ if t != nothing
325
+ ((x)->((y)->((z)->z .*= 27.2113845).(y)).(x)).(t) # Hartree to eV
326
+ end
327
+ end
328
+ site_positions .*= 0.529177249 # Bohr to Ang
329
+ lat .*= 0.529177249 # Bohr to Ang
330
+
331
+ # get R_list
332
+ R_list = Set{Vector{Int64}}()
333
+ for atom_i in 1:nsites, index_nn_i in 1:FNAN[atom_i]
334
+ atom_j = natn[atom_i][index_nn_i]
335
+ R = atv_ijk[:, ncn[atom_i][index_nn_i]]
336
+ push!(R_list, SVector{3, Int64}(R))
337
+ end
338
+ R_list = collect(R_list)
339
+
340
+ # get neighbour_site
341
+ nsites = size(site_positions, 2)
342
+ # neighbour_site = cal_neighbour_site(site_positions, lat, R_list, nsites, Rcut)
343
+ # local_coordinates = Dict{Array{Int64, 1}, Array{Float64, 2}}()
344
+
345
+ # process hamiltonian
346
+ norbits = sum(Total_NumOrbs)
347
+ overlaps = Dict{Array{Int64, 1}, Array{Float64, 2}}()
348
+ if SpinP_switch == 0
349
+ spinful = false
350
+ hamiltonians = Dict{Array{Int64, 1}, Array{Float64, 2}}()
351
+ if if_DM
352
+ density_matrixs = Dict{Array{Int64, 1}, Array{Float64, 2}}()
353
+ else
354
+ density_matrixs = nothing
355
+ end
356
+ elseif SpinP_switch == 1
357
+ error("Collinear spin is not supported currently")
358
+ elseif SpinP_switch == 3
359
+ @assert if_DM == false
360
+ density_matrixs = nothing
361
+ spinful = true
362
+ for i in 1:length(Hk[4]),j in 1:length(Hk[4][i])
363
+ Hk[4][i][j] += iHk[3][i][j]
364
+ iHk[3][i][j] = -Hk[4][i][j]
365
+ end
366
+ hamiltonians = Dict{Array{Int64, 1}, Array{Complex{Float64}, 2}}()
367
+ else
368
+ error("SpinP_switch is $SpinP_switch, rather than valid values 0, 1 or 3")
369
+ end
370
+
371
+ for site_i in 1:nsites, index_nn_i in 1:FNAN[site_i]
372
+ site_j = natn[site_i][index_nn_i]
373
+ R = atv_ijk[:, ncn[site_i][index_nn_i]]
374
+ e_ij = lat * R + site_positions[:, site_j] - site_positions[:, site_i]
375
+ # if norm(e_ij) > Rcut
376
+ # continue
377
+ # end
378
+ key = cat(dims=1, R, site_i, site_j)
379
+ # site_list_i = neighbour_site[site_i]
380
+ # local_coordinate = _get_local_coordinate(e_ij, site_list_i)
381
+ # local_coordinates[key] = local_coordinate
382
+
383
+ overlap = permutedims(OLP[site_i][index_nn_i])
384
+ overlaps[key] = overlap
385
+ if SpinP_switch == 0
386
+ hamiltonian = permutedims(Hk[1][site_i][index_nn_i])
387
+ hamiltonians[key] = hamiltonian
388
+ if if_DM
389
+ density_matrix = permutedims(DM[1][site_i][index_nn_i])
390
+ density_matrixs[key] = density_matrix
391
+ end
392
+ elseif SpinP_switch == 1
393
+ error("Collinear spin is not supported currently")
394
+ elseif SpinP_switch == 3
395
+ key_inv = cat(dims=1, -R, site_j, site_i)
396
+
397
+ len_i_wo_spin = Total_NumOrbs[site_i]
398
+ len_j_wo_spin = Total_NumOrbs[site_j]
399
+
400
+ if !(key in keys(hamiltonians))
401
+ @assert !(key_inv in keys(hamiltonians))
402
+ hamiltonians[key] = zeros(Complex{Float64}, len_i_wo_spin * 2, len_j_wo_spin * 2)
403
+ hamiltonians[key_inv] = zeros(Complex{Float64}, len_j_wo_spin * 2, len_i_wo_spin * 2)
404
+ end
405
+ for spini in 0:1,spinj in spini:1
406
+ Hk_real, Hk_imag = spini == 0 ? spinj == 0 ? (Hk[1][site_i][index_nn_i], iHk[1][site_i][index_nn_i]) : (Hk[3][site_i][index_nn_i], Hk[4][site_i][index_nn_i]) : spinj == 0 ? (Hk[3][site_i][index_nn_i], iHk[3][site_i][index_nn_i]) : (Hk[2][site_i][index_nn_i], iHk[2][site_i][index_nn_i])
407
+ hamiltonians[key][spini * len_i_wo_spin + 1 : (spini + 1) * len_i_wo_spin, spinj * len_j_wo_spin + 1 : (spinj + 1) * len_j_wo_spin] = permutedims(Hk_real) + im * permutedims(Hk_imag)
408
+ if spini == 0 && spinj == 1
409
+ hamiltonians[key_inv][1 * len_j_wo_spin + 1 : (1 + 1) * len_j_wo_spin, 0 * len_i_wo_spin + 1 : (0 + 1) * len_i_wo_spin] = (permutedims(Hk_real) + im * permutedims(Hk_imag))'
410
+ end
411
+ end
412
+ else
413
+ error("SpinP_switch is $SpinP_switch, rather than valid values 0, 1 or 3")
414
+ end
415
+ end
416
+
417
+ return element, overlaps, density_matrixs, hamiltonians, fermi_level, orbital_types, lat, site_positions, spinful, R_list
418
+ end
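For `SpinP_switch == 3` the loop above assembles the four spin blocks of each hopping into a doubled complex matrix, filling the down-up block of the reversed bond from Hermiticity. A simplified NumPy sketch of the on-site (square) case; `assemble_onsite_spin_blocks` is a hypothetical helper, not part of the package:

```python
import numpy as np

def assemble_onsite_spin_blocks(h_uu, h_dd, h_ud):
    # [[up-up, up-down], [down-up, down-down]]; for an on-site block,
    # Hermiticity fixes down-up as the conjugate transpose of up-down.
    n = h_uu.shape[0]
    H = np.zeros((2 * n, 2 * n), dtype=complex)
    H[:n, :n] = h_uu
    H[n:, n:] = h_dd
    H[:n, n:] = h_ud
    H[n:, :n] = h_ud.conj().T
    return H
```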
419
+
420
+ parsed_args["input_dir"] = abspath(parsed_args["input_dir"])
421
+ mkpath(parsed_args["output_dir"])
422
+ cd(parsed_args["output_dir"])
423
+
424
+ element, overlaps, density_matrixs, hamiltonians, fermi_level, orbital_types, lat, site_positions, spinful, R_list = get_data(joinpath(parsed_args["input_dir"], "openmx.scfout"), -1.0; if_DM=parsed_args["if_DM"])
425
+
426
+ if parsed_args["if_DM"]
427
+ h5open("density_matrixs.h5", "w") do fid
428
+ for (key, density_matrix) in density_matrixs
429
+ write(fid, string(key), permutedims(density_matrix))
430
+ end
431
+ end
432
+ end
433
+ if parsed_args["save_overlap"]
434
+ h5open("overlaps.h5", "w") do fid
435
+ for (key, overlap) in overlaps
436
+ write(fid, string(key), permutedims(overlap))
437
+ end
438
+ end
439
+ end
440
+ h5open("hamiltonians.h5", "w") do fid
441
+ for (key, hamiltonian) in hamiltonians
442
+ write(fid, string(key), permutedims(hamiltonian))
443
+ end
444
+ end
445
+
446
+ info_dict = Dict(
447
+ "fermi_level" => fermi_level,
448
+ "isspinful" => spinful
449
+ )
450
+ open("info.json", "w") do f
451
+ write(f, json(info_dict, 4))
452
+ end
453
+ open("site_positions.dat", "w") do f
454
+ writedlm(f, site_positions)
455
+ end
456
+ open("R_list.dat", "w") do f
457
+ writedlm(f, R_list)
458
+ end
459
+ open("lat.dat", "w") do f
460
+ writedlm(f, lat)
461
+ end
462
+ rlat = 2pi * inv(lat)'
463
+ open("rlat.dat", "w") do f
464
+ writedlm(f, rlat)
465
+ end
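`rlat = 2pi * inv(lat)'` yields reciprocal lattice vectors (as columns) satisfying a_i · b_j = 2π δ_ij. A NumPy check on an illustrative lattice (values invented for the example):

```python
import numpy as np

lat = np.array([[0.0, 1.7, 1.7],
                [1.7, 0.0, 1.7],
                [1.7, 1.7, 0.0]])  # columns are lattice vectors (example only)
rlat = 2 * np.pi * np.linalg.inv(lat).T  # columns are reciprocal vectors

# a_i . b_j = 2*pi*delta_ij
assert np.allclose(lat.T @ rlat, 2 * np.pi * np.eye(3))
```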
466
+ open("orbital_types.dat", "w") do f
467
+ writedlm(f, orbital_types)
468
+ end
469
+ open("element.dat", "w") do f
470
+ writedlm(f, element)
471
+ end
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/openmx_parse.py ADDED
@@ -0,0 +1,425 @@
1
+ import os
2
+ import json
3
+ from math import pi
4
+
5
+ import tqdm
6
+ import argparse
7
+ import h5py
8
+ import numpy as np
9
+ from pymatgen.core.structure import Structure
10
+
11
+ from .abacus_get_data import periodic_table
12
+
13
+ Hartree2Ev = 27.2113845
14
+ Ev2Kcalmol = 23.061
15
+ Bohr2R = 0.529177249
16
+
17
+
18
+ def openmx_force_intferface(out_file_dir, save_dir=None, return_Etot=False, return_force=False):
19
+ with open(out_file_dir, 'r') as out_file:
20
+ lines = out_file.readlines()
21
+ for index_line, line in enumerate(lines):
22
+ if line.find('Total energy (Hartree) at MD = 1') != -1:
23
+ assert lines[index_line + 3].find("Uele.") != -1
24
+ assert lines[index_line + 5].find("Ukin.") != -1
25
+ assert lines[index_line + 7].find("UH1.") != -1
26
+ assert lines[index_line + 8].find("Una.") != -1
27
+ assert lines[index_line + 9].find("Unl.") != -1
28
+ assert lines[index_line + 10].find("Uxc0.") != -1
29
+ assert lines[index_line + 20].find("Utot.") != -1
30
+ parse_E = lambda x: float(x.split()[-1])
31
+ E_tot = parse_E(lines[index_line + 20]) * Hartree2Ev
32
+ E_kin = parse_E(lines[index_line + 5]) * Hartree2Ev
33
+ E_delta_ee = parse_E(lines[index_line + 7]) * Hartree2Ev
34
+ E_NA = parse_E(lines[index_line + 8]) * Hartree2Ev
35
+ E_NL = parse_E(lines[index_line + 9]) * Hartree2Ev
36
+ E_xc = parse_E(lines[index_line + 10]) * 2 * Hartree2Ev
37
+ if save_dir is not None:
38
+ with open(os.path.join(save_dir, "openmx_E.json"), 'w') as E_file:
39
+ json.dump({
40
+ "Total energy": E_tot,
41
+ "E_kin": E_kin,
42
+ "E_delta_ee": E_delta_ee,
43
+ "E_NA": E_NA,
44
+ "E_NL": E_NL,
45
+ "E_xc": E_xc
46
+ }, E_file)
47
+ if line.find('xyz-coordinates (Ang) and forces (Hartree/Bohr)') != -1:
48
+ assert lines[index_line + 4].find("<coordinates.forces") != -1
49
+ num_atom = int(lines[index_line + 5])
50
+ forces = np.zeros((num_atom, 3))
51
+ for index_atom in range(num_atom):
52
+ forces[index_atom] = list(
53
+ map(lambda x: float(x) * Hartree2Ev / Bohr2R, lines[index_line + 6 + index_atom].split()[-3:]))
54
+ break
55
+ if save_dir is not None:
56
+ np.savetxt(os.path.join(save_dir, "openmx_forces.dat"), forces)
57
+ ret = (E_kin, E_delta_ee, E_NA, E_NL, E_xc)
58
+ if return_Etot is True:
59
+ ret = ret + (E_tot,)
60
+ if return_force is True:
61
+ ret = ret + (forces,)
62
+ return ret
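`openmx_force_intferface` locates report lines such as `Utot.` and takes the trailing whitespace-separated token as the value, then converts Hartree to eV. A minimal sketch of that parsing step (the sample line and its value are invented for illustration):

```python
HARTREE2EV = 27.2113845

def parse_last_float(line):
    # Take the trailing token of a "Name.   value" report line.
    return float(line.split()[-1])

# hypothetical OpenMX-style report line, value invented
e_tot_ev = parse_last_float("   Utot.          -11.932809") * HARTREE2EV
```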
63
+
64
+
65
+ def openmx_parse_overlap(OLP_dir, output_dir):
66
+ assert os.path.exists(os.path.join(OLP_dir, "output", "overlaps_0.h5")), "No overlap files found"
67
+ assert os.path.exists(os.path.join(OLP_dir, "openmx.out")), "openmx.out not found"
68
+
69
+ overlaps = read_non_parallel_hdf5('overlaps', os.path.join(OLP_dir, 'output'))
70
+ assert len(overlaps) != 0, 'Cannot find any overlap file'
71
+ fid = h5py.File(os.path.join(output_dir, 'overlaps.h5'), 'w')
72
+ for key_str, v in overlaps.items():
73
+ fid[key_str] = v
74
+ fid.close()
75
+
76
+ orbital2l = {"s": 0, "p": 1, "d": 2, "f": 3}
77
+ # parse openmx.out
78
+ with open(os.path.join(OLP_dir, "openmx.out"), "r") as f:
79
+ lines = f.readlines()
80
+ orbital_dict = {}
81
+ lattice = np.zeros((3, 3))
82
+ frac_coords = []
83
+ atomic_elements_str = []
84
+ flag_read_orbital = False
85
+ flag_read_lattice = False
86
+ for index_line, line in enumerate(lines):
87
+ if line.find('Definition.of.Atomic.Species>') != -1:
88
+ flag_read_orbital = False
89
+ if flag_read_orbital:
90
+ element = line.split()[0]
91
+ orbital_str = (line.split()[1]).split('-')[-1]
92
+ l_list = []
93
+ assert len(orbital_str) % 2 == 0
94
+ for index_str in range(len(orbital_str) // 2):
95
+ l_list.extend([orbital2l[orbital_str[index_str * 2]]] * int(orbital_str[index_str * 2 + 1]))
96
+ orbital_dict[element] = l_list
97
+ if line.find('<Definition.of.Atomic.Species') != -1:
98
+ flag_read_orbital = True
99
+
100
+ if line.find('Atoms.UnitVectors.Unit') != -1:
101
+ assert line.split()[1] == "Ang", "Unit of lattice vector is not Angstrom"
102
+ assert lines[index_line + 1].find("<Atoms.UnitVectors") != -1
103
+ lattice[0, :] = np.array(list(map(float, lines[index_line + 2].split())))
104
+ lattice[1, :] = np.array(list(map(float, lines[index_line + 3].split())))
105
+ lattice[2, :] = np.array(list(map(float, lines[index_line + 4].split())))
106
+ flag_read_lattice = True
107
+
108
+ if line.find('Fractional coordinates of the final structure') != -1:
109
+ index_atom = 0
110
+ while (index_line + index_atom + 4) < len(lines):
111
+ index_atom += 1
112
+ line_split = lines[index_line + index_atom + 3].split()
113
+ if len(line_split) == 0:
114
+ break
115
+ assert len(line_split) == 5
116
+ assert line_split[0] == str(index_atom)
117
+ atomic_elements_str.append(line_split[1])
118
+ frac_coords.append(np.array(list(map(float, line_split[2:]))))
119
+ print("Found", len(frac_coords), "atoms")
120
+ if flag_read_lattice is False:
121
+ raise RuntimeError("Could not find lattice vector in openmx.out")
122
+ if len(orbital_dict) == 0:
123
+ raise RuntimeError("Could not find orbital information in openmx.out")
124
+ frac_coords = np.array(frac_coords)
125
+ cart_coords = frac_coords @ lattice
126
+
127
+ np.savetxt(os.path.join(output_dir, "site_positions.dat"), cart_coords.T)
128
+ np.savetxt(os.path.join(output_dir, "lat.dat"), lattice.T)
129
+ np.savetxt(os.path.join(output_dir, "rlat.dat"), np.linalg.inv(lattice) * 2 * pi)
130
+ np.savetxt(os.path.join(output_dir, "element.dat"),
131
+ np.array(list(map(lambda x: periodic_table[x], atomic_elements_str))), fmt='%d')
132
+ with open(os.path.join(output_dir, 'orbital_types.dat'), 'w') as orbital_types_f:
133
+ for element_str in atomic_elements_str:
134
+ for index_l, l in enumerate(orbital_dict[element_str]):
135
+ if index_l == 0:
136
+ orbital_types_f.write(str(l))
137
+ else:
138
+ orbital_types_f.write(f" {l}")
139
+ orbital_types_f.write('\n')
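Both the Julia and Python parsers decode OpenMX basis strings such as `s2p2d1` into a list of angular momenta, one entry per radial shell. A standalone sketch of the same decoding (like the Python loop above, it assumes single-digit shell counts):

```python
ORB2L = {"s": 0, "p": 1, "d": 2, "f": 3}

def orbital_str_to_l(orbital_str):
    # "s2p2d1" -> [0, 0, 1, 1, 2]: each letter/count pair expands into
    # `count` copies of its angular momentum l.
    assert len(orbital_str) % 2 == 0
    l_list = []
    for i in range(0, len(orbital_str), 2):
        l_list.extend([ORB2L[orbital_str[i]]] * int(orbital_str[i + 1]))
    return l_list
```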
140
+
141
+
142
+ def read_non_parallel_hdf5(name, file_dir, num_p=256):
143
+ Os = {}
144
+ for index_p in range(num_p):
145
+ if os.path.exists(os.path.join(file_dir, f"{name}_{index_p}.h5")):
146
+ fid = h5py.File(os.path.join(file_dir, f"{name}_{index_p}.h5"), 'r')
147
+ for key_str, O_nm in fid.items():
148
+ Os[key_str] = O_nm[...]
149
+ assert not os.path.exists(os.path.join(file_dir, f"{name}_{num_p}.h5")), "Increase num_p because some overlap files are missing"
150
+ return Os
151
+
152
+
153
+ def read_hdf5(name, file_dir):
154
+ Os = {}
155
+ fid = h5py.File(os.path.join(file_dir, f"{name}.h5"), 'r')
156
+ for key_str, O_nm in fid.items():
157
+ Os[key_str] = O_nm[...]
158
+ return Os
159
+
160
+
161
+ class OijLoad:
162
+ def __init__(self, output_dir):
163
+ print("get data from:", output_dir)
164
+ self.if_load_scfout = False
165
+ self.output_dir = output_dir
166
+ term_non_parallel_list = ['H', 'T', 'V_xc', 'O_xc', 'O_dVHart', 'O_NA', 'O_NL', 'Rho']
167
+ self.term_h5_dict = {}
168
+ for term in term_non_parallel_list:
169
+ self.term_h5_dict[term] = read_non_parallel_hdf5(term, output_dir)
170
+
171
+ self.term_h5_dict['H_add'] = {}
172
+ for key_str in self.term_h5_dict['T'].keys():
173
+ tmp = np.zeros_like(self.term_h5_dict['T'][key_str])
174
+ for term in ['T', 'V_xc', 'O_dVHart', 'O_NA', 'O_NL']:
175
+ tmp += self.term_h5_dict[term][key_str]
176
+ self.term_h5_dict['H_add'][key_str] = tmp
177
+
178
+ self.dig_term = {}
179
+ for term in ['E_dVHart_a', 'E_xc_pcc']:
180
+ self.dig_term[term] = np.loadtxt(os.path.join(output_dir, f'{term}.dat'))
181
+
182
+ def cal_Eij(self):
183
+ term_list = ["E_kin", "E_NA", "E_NL", "E_delta_ee", "E_xc"]
184
+ self.Eij = {term: {} for term in term_list}
185
+ self.R_list = []
186
+ for key_str in self.term_h5_dict['T'].keys():
187
+ key = json.loads(key_str)
188
+ R = (key[0], key[1], key[2])
189
+ if R not in self.R_list:
190
+ self.R_list.append(R)
191
+ atom_i = key[3] - 1
192
+ atom_j = key[4] - 1
193
+
194
+ self.Eij["E_NA"][key_str] = (self.term_h5_dict["O_NA"][key_str] * self.term_h5_dict["Rho"][key_str]).sum() * 2
195
+ self.Eij["E_NL"][key_str] = (self.term_h5_dict["O_NL"][key_str] * self.term_h5_dict["Rho"][key_str]).sum() * 2
196
+ self.Eij["E_kin"][key_str] = (self.term_h5_dict["T"][key_str] * self.term_h5_dict["Rho"][key_str]).sum() * 2
197
+ self.Eij["E_delta_ee"][key_str] = (self.term_h5_dict["O_dVHart"][key_str] * self.term_h5_dict["Rho"][key_str]).sum()
198
+ self.Eij["E_xc"][key_str] = (self.term_h5_dict["O_xc"][key_str] * self.term_h5_dict["Rho"][key_str]).sum() * 2
199
+ if (atom_i == atom_j) and (R == (0, 0, 0)):
200
+ self.Eij["E_delta_ee"][key_str] -= self.dig_term['E_dVHart_a'][atom_i]
201
+ self.Eij["E_xc"][key_str] += self.dig_term['E_xc_pcc'][atom_i] * 2
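Each pairwise energy in `cal_Eij` is an elementwise product of an operator block with the corresponding density-matrix block, summed over entries; that sum is the Frobenius inner product, i.e. Tr(O ρᵀ). A small NumPy check of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
O = rng.standard_normal((4, 3))
rho = rng.standard_normal((4, 3))

# (O * rho).sum() is the Frobenius inner product, equal to Tr(O @ rho.T)
assert np.isclose((O * rho).sum(), np.trace(O @ rho.T))
```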
202
+
203
+ def load_scfout(self):
204
+ self.if_load_scfout = True
205
+ term_list = ["hamiltonians", "overlaps", "density_matrixs"]
206
+ default_dtype = np.complex128
207
+
208
+ for term in term_list:
209
+ self.term_h5_dict[term] = read_hdf5(term, self.output_dir)
210
+
211
+ site_positions = np.loadtxt(os.path.join(self.output_dir, 'site_positions.dat')).T
212
+ self.lat = np.loadtxt(os.path.join(self.output_dir, 'lat.dat')).T
213
+ self.rlat = np.loadtxt(os.path.join(self.output_dir, 'rlat.dat')).T
214
+ nsites = site_positions.shape[0]
215
+
216
+ self.orbital_types = []
217
+ with open(os.path.join(self.output_dir, 'orbital_types.dat'), 'r') as orbital_types_f:
218
+ for index_site in range(nsites):
219
+ self.orbital_types.append(np.array(list(map(int, orbital_types_f.readline().split()))))
220
+ site_norbits = list(map(lambda x: (2 * x + 1).sum(), self.orbital_types))
221
+ site_norbits_cumsum = np.cumsum(site_norbits)
222
+ norbits = sum(site_norbits)
223
+
224
+ self.term_R_dict = {term: {} for term in self.term_h5_dict.keys()}
225
+ for key_str in tqdm.tqdm(self.term_h5_dict['overlaps'].keys()):
226
+ key = json.loads(key_str)
227
+ R = (key[0], key[1], key[2])
228
+ atom_i = key[3] - 1
229
+ atom_j = key[4] - 1
230
+ if R not in self.term_R_dict['overlaps']:
231
+ for term_R in self.term_R_dict.values():
232
+ term_R[R] = np.zeros((norbits, norbits), dtype=default_dtype)
233
+ matrix_slice_i = slice(site_norbits_cumsum[atom_i] - site_norbits[atom_i], site_norbits_cumsum[atom_i])
234
+ matrix_slice_j = slice(site_norbits_cumsum[atom_j] - site_norbits[atom_j], site_norbits_cumsum[atom_j])
235
+ for term, term_R in self.term_R_dict.items():
236
+ term_R[R][matrix_slice_i, matrix_slice_j] = np.array(self.term_h5_dict[term][key_str]).astype(
237
+ dtype=default_dtype)
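`load_scfout` derives each site's orbital count as Σ(2l+1) over its angular momenta, then uses the cumulative sums to slice per-pair blocks into the full (norbits × norbits) matrices. The counting step in isolation:

```python
import numpy as np

orbital_types = [np.array([0, 0, 1, 1, 2])]  # one site with an s2p2d1 basis
site_norbits = [int((2 * t + 1).sum()) for t in orbital_types]
# s,s,p,p,d -> 1+1+3+3+5 = 13 orbitals on this site
assert site_norbits == [13]
```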
238
+
239
+ def get_E_band(self):
240
+ E_band = 0.0
241
+ for R in self.term_R_dict['T'].keys():
242
+ E_band += (self.term_R_dict['density_matrixs'][R] * self.term_R_dict['H_add'][R]).sum()
243
+ return E_band
244
+
245
+ def get_E_band2(self):
246
+ E_band = 0.0
247
+ for R in self.term_R_dict['T'].keys():
248
+ E_band += (self.term_R_dict['density_matrixs'][R] * self.term_R_dict['hamiltonians'][R]).sum()
249
+ return E_band
250
+
251
+ def get_E_band3(self):
252
+ E_band = 0.0
253
+ for R in self.term_R_dict['T'].keys():
254
+ E_band += (self.term_R_dict['density_matrixs'][R] * self.term_R_dict['H'][R]).sum()
255
+ return E_band
256
+
257
+ def sum_Eij(self, term):
258
+ ret = 0.0
259
+ for value in self.Eij[term].values():
260
+ ret += value
261
+ return ret
262
+
263
+ def get_E_NL(self):
264
+ assert self.if_load_scfout == True
265
+ E_NL = 0.0
266
+ for R in self.term_R_dict['T'].keys():
267
+ E_NL += (self.term_R_dict['density_matrixs'][R] * self.term_R_dict['O_NL'][R]).sum()
268
+ return E_NL
269
+
270
+ def save_Vij(self, save_dir):
271
+ for term, h5_file_name in zip(["O_NA", "O_dVHart", "V_xc", "H_add", "Rho"],
272
+ ["V_nas", "V_delta_ees", "V_xcs", "hamiltonians", "density_matrixs"]):
273
+ fid = h5py.File(os.path.join(save_dir, f'{h5_file_name}.h5'), "w")
274
+ for k, v in self.term_h5_dict[term].items():
275
+ fid[k] = v
276
+ fid.close()
277
+
278
+ def get_E5ij(self):
279
+ term_list = ["E_kin", "E_NA", "E_NL", "E_delta_ee", "E_xc"]
280
+ E_dict = {term: 0 for term in term_list}
281
+ E5ij = {}
282
+ for key_str in self.Eij[term_list[0]].keys():
283
+ tmp = 0.0
284
+ for term in term_list:
285
+ v = self.Eij[term][key_str]
286
+ E_dict[term] += v
287
+ tmp += v
288
+ if key_str in E5ij:
289
+ E5ij[key_str] += tmp
290
+ else:
291
+ E5ij[key_str] = tmp
292
+ return E5ij, E_dict
293
+
294
+ def save_Eij(self, save_dir):
295
+ fid_tmp, E_dict = self.get_E5ij()
296
+
297
+ fid = h5py.File(os.path.join(save_dir, f'E_ij.h5'), "w")
298
+ for k, v in fid_tmp.items():
299
+ fid[k] = v
300
+ fid.close()
301
+
302
+ with open(os.path.join(save_dir, "openmx_E_ij_E.json"), 'w') as E_file:
303
+ json.dump({
304
+ "E_kin": E_dict["E_kin"],
305
+ "E_delta_ee": E_dict["E_delta_ee"],
306
+ "E_NA": E_dict["E_NA"],
307
+ "E_NL": E_dict["E_NL"],
308
+ "E_xc": E_dict["E_xc"]
309
+ }, E_file)
310
+
311
+ # return E_dict["E_delta_ee"], E_dict["E_xc"]
312
+ return E_dict["E_kin"], E_dict["E_delta_ee"], E_dict["E_NA"], E_dict["E_NL"], E_dict["E_xc"]
313
+
314
+ def get_E5i(self):
315
+ term_list = ["E_kin", "E_NA", "E_NL", "E_delta_ee", "E_xc"]
316
+ E_dict = {term: 0 for term in term_list}
317
+ E5i = {}
318
+ for key_str in self.Eij[term_list[0]].keys():
319
+ key = json.loads(key_str)
320
+ atom_i_str = str(key[3] - 1)
321
+ tmp = 0.0
322
+ for term in term_list:
323
+ v = self.Eij[term][key_str]
324
+ E_dict[term] += v
325
+ tmp += v
326
+ if atom_i_str in E5i:
327
+ E5i[atom_i_str] += tmp
328
+ else:
329
+ E5i[atom_i_str] = tmp
330
+ return E5i, E_dict
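`get_E5i` collapses pair energies keyed by strings like `"[Rx, Ry, Rz, i, j]"` onto atom i (1-indexed in OpenMX). A minimal sketch of that aggregation; `pair_to_site_energies` is a hypothetical helper and the keys/values are invented:

```python
import json

def pair_to_site_energies(e_ij):
    # Collapse pair energies keyed by "[Rx, Ry, Rz, i, j]" onto atom i.
    e_i = {}
    for key_str, v in e_ij.items():
        atom_i = json.loads(key_str)[3] - 1  # OpenMX atoms are 1-indexed
        e_i[atom_i] = e_i.get(atom_i, 0.0) + v
    return e_i

assert pair_to_site_energies({"[0, 0, 0, 1, 2]": 1.5,
                              "[0, 0, 0, 1, 1]": 0.5}) == {0: 2.0}
```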
331
+
332
+ def save_Ei(self, save_dir):
333
+ fid_tmp, E_dict = self.get_E5i()
334
+
335
+ fid = h5py.File(os.path.join(save_dir, f'E_i.h5'), "w")
336
+ for k, v in fid_tmp.items():
337
+ fid[k] = v
338
+ fid.close()
339
+ with open(os.path.join(save_dir, "openmx_E_i_E.json"), 'w') as E_file:
340
+ json.dump({
341
+ "E_kin": E_dict["E_kin"],
342
+ "E_delta_ee": E_dict["E_delta_ee"],
343
+ "E_NA": E_dict["E_NA"],
344
+ "E_NL": E_dict["E_NL"],
345
+ "E_xc": E_dict["E_xc"]
346
+ }, E_file)
347
+ return E_dict["E_kin"], E_dict["E_delta_ee"], E_dict["E_NA"], E_dict["E_NL"], E_dict["E_xc"]
348
+
349
+ def get_R_list(self):
350
+ return self.R_list
351
+
352
+
353
+ class GetEEiEij:
354
+ def __init__(self, input_dir):
355
+ self.load_kernel = OijLoad(os.path.join(input_dir, "output"))
356
+ self.E_kin, self.E_delta_ee, self.E_NA, self.E_NL, self.E_xc, self.Etot, self.force = openmx_force_intferface(
357
+ os.path.join(input_dir, "openmx.out"), save_dir=None, return_Etot=True, return_force=True)
358
+ self.load_kernel.cal_Eij()
359
+
360
+ def get_Etot(self):
361
+ # unit: kcal mol^-1
362
+ return self.Etot * Ev2Kcalmol
363
+
364
+ def get_force(self):
365
+ # unit: kcal mol^-1 Angstrom^-1
366
+ return self.force * Ev2Kcalmol
367
+
368
+ def get_E5(self):
369
+ # unit: kcal mol^-1
370
+ return (self.E_kin + self.E_delta_ee + self.E_NA + self.E_NL + self.E_xc) * Ev2Kcalmol
371
+
372
+ def get_E5i(self):
373
+ # unit: kcal mol^-1
374
+ E5i, E_from_i_dict = self.load_kernel.get_E5i()
375
+ assert np.allclose(self.E_kin, E_from_i_dict["E_kin"])
376
+ assert np.allclose(self.E_delta_ee, E_from_i_dict["E_delta_ee"])
377
+ assert np.allclose(self.E_NA, E_from_i_dict["E_NA"])
378
+ assert np.allclose(self.E_NL, E_from_i_dict["E_NL"])
379
+ assert np.allclose(self.E_xc, E_from_i_dict["E_xc"], rtol=1.e-3)
380
+ return {k: v * Ev2Kcalmol for k, v in E5i.items()}
381
+
382
+ def get_E5ij(self):
383
+ # unit: kcal mol^-1
384
+ E5ij, E_from_ij_dict = self.load_kernel.get_E5ij()
385
+ assert np.allclose(self.E_kin, E_from_ij_dict["E_kin"])
386
+ assert np.allclose(self.E_delta_ee, E_from_ij_dict["E_delta_ee"])
387
+ assert np.allclose(self.E_NA, E_from_ij_dict["E_NA"])
388
+ assert np.allclose(self.E_NL, E_from_ij_dict["E_NL"])
389
+ assert np.allclose(self.E_xc, E_from_ij_dict["E_xc"], rtol=1.e-3)
390
+ return {k: v * Ev2Kcalmol for k, v in E5ij.items()}
391
+
392
+
393
+ if __name__ == '__main__':
394
+ parser = argparse.ArgumentParser(description='Predict Hamiltonian')
395
+ parser.add_argument(
396
+ '--input_dir', type=str, default='./',
397
+ help='path of openmx.out, and output'
398
+ )
399
+ parser.add_argument(
400
+ '--output_dir', type=str, default='./',
401
+ help='path of output E_xc_ij.h5, E_delta_ee_ij.h5, site_positions.dat, lat.dat, element.dat, and R_list.dat'
402
+ )
403
+ parser.add_argument('--Ei', action='store_true')
404
+ parser.add_argument('--stru_dir', type=str, default='POSCAR', help='path of structure file')
405
+ args = parser.parse_args()
406
+
407
+ os.makedirs(args.output_dir, exist_ok=True)
408
+ load_kernel = OijLoad(os.path.join(args.input_dir, "output"))
409
+ E_kin, E_delta_ee, E_NA, E_NL, E_xc = openmx_force_intferface(os.path.join(args.input_dir, "openmx.out"), args.output_dir)
410
+ load_kernel.cal_Eij()
411
+ if args.Ei:
412
+ E_kin_from_ij, E_delta_ee_from_ij, E_NA_from_ij, E_NL_from_ij, E_xc_from_ij = load_kernel.save_Ei(args.output_dir)
413
+ else:
414
+ E_kin_from_ij, E_delta_ee_from_ij, E_NA_from_ij, E_NL_from_ij, E_xc_from_ij = load_kernel.save_Eij(args.output_dir)
415
+ assert np.allclose(E_kin, E_kin_from_ij)
416
+ assert np.allclose(E_delta_ee, E_delta_ee_from_ij)
417
+ assert np.allclose(E_NA, E_NA_from_ij)
418
+ assert np.allclose(E_NL, E_NL_from_ij)
419
+ assert np.allclose(E_xc, E_xc_from_ij, rtol=1.e-3)
420
+
421
+ structure = Structure.from_file(args.stru_dir)
422
+ np.savetxt(os.path.join(args.output_dir, "site_positions.dat"), structure.cart_coords.T)
423
+ np.savetxt(os.path.join(args.output_dir, "lat.dat"), structure.lattice.matrix.T)
424
+ np.savetxt(os.path.join(args.output_dir, "element.dat"), structure.atomic_numbers, fmt='%d')
425
+ np.savetxt(os.path.join(args.output_dir, "R_list.dat"), load_kernel.get_R_list(), fmt='%d')
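The `np.savetxt` calls above follow DeepH's column-major `.dat` convention: positions and lattice vectors are written transposed, one vector per column. A minimal numpy sketch of the round trip (the 2-atom geometry is made up for illustration):

```python
import io
import numpy as np

# Hypothetical 2-atom geometry (rows = atoms, columns = x/y/z).
cart_coords = np.array([[0.0, 0.0, 0.0],
                        [0.89, 0.89, 0.89]])

buf = io.StringIO()
np.savetxt(buf, cart_coords.T)   # one atom per column, as in the script
buf.seek(0)

# Reading the file back therefore requires transposing again.
recovered = np.loadtxt(buf).T
assert recovered.shape == cart_coords.shape
assert np.allclose(recovered, cart_coords)
```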
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/periodic_table.json ADDED
The diff for this file is too large to render. See raw diff
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/preprocess_default.ini ADDED
@@ -0,0 +1,20 @@
1
+ [basic]
2
+ raw_dir = /your/own/path
3
+ processed_dir = /your/own/path
4
+ target = hamiltonian
5
+ interface = openmx
6
+ multiprocessing = 0
7
+ local_coordinate = True
8
+ get_S = False
9
+
10
+ [interpreter]
11
+ julia_interpreter = julia
12
+
13
+ [graph]
14
+ radius = -1.0
15
+ create_from_DFT = True
16
+ r2_rand = False
17
+
18
+ [magnetic_moment]
19
+ parse_magnetic_moment = False
20
+ magnetic_element = ["Cr", "Mn", "Fe", "Co", "Ni"]
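The defaults above are plain INI and can be consumed with Python's `configparser`; a minimal sketch using section and option names from the file (decoding the JSON-style `magnetic_element` list with `json.loads` is an assumption about how downstream code parses it):

```python
import configparser
import json

cfg = configparser.ConfigParser()
cfg.read_string("""
[basic]
raw_dir = /your/own/path
target = hamiltonian
local_coordinate = True

[graph]
radius = -1.0
create_from_DFT = True

[magnetic_moment]
magnetic_element = ["Cr", "Mn", "Fe", "Co", "Ni"]
""")

assert cfg.get('basic', 'target') == 'hamiltonian'
assert cfg.getboolean('basic', 'local_coordinate') is True
assert cfg.getfloat('graph', 'radius') == -1.0
# list-valued options arrive as strings; JSON-decode by hand (assumption)
elements = json.loads(cfg.get('magnetic_moment', 'magnetic_element'))
assert elements[0] == 'Cr'
```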
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/preprocess/siesta_get_data.py ADDED
@@ -0,0 +1,336 @@
1
+ import os
2
+ import numpy as np
5
+ import h5py
6
+ import json
7
+ from scipy.io import FortranFile
8
+
9
+ # Transfer SIESTA output to DeepH format
10
+ # DeepH-pack: https://deeph-pack.readthedocs.io/en/latest/index.html
11
+ # Coded by ZC Tang @ Tsinghua Univ. e-mail: az_txycha@126.com
12
+
13
+ def siesta_parse(input_path, output_path):
14
+ input_path = os.path.abspath(input_path)
15
+ output_path = os.path.abspath(output_path)
16
+ os.makedirs(output_path, exist_ok=True)
17
+
18
+ # finds system name
19
+ f_list = os.listdir(input_path)
20
+ for f_name in f_list:
21
+ if f_name.endswith('.ORB_INDX'):
22
+ system_name = f_name[:-9]
23
+
24
+ with open('{}/{}.STRUCT_OUT'.format(input_path,system_name), 'r') as struct: # structure info from the STRUCT_OUT file
25
+ lattice = np.empty((3,3))
26
+ for i in range(3):
27
+ line = struct.readline()
28
+ linesplit = line.split()
29
+ lattice[i,:] = linesplit[:]
30
+ np.savetxt('{}/lat.dat'.format(output_path), np.transpose(lattice), fmt='%.18e')
31
+ line = struct.readline()
32
+ linesplit = line.split()
33
+ num_atoms = int(linesplit[0])
34
+ atom_coord = np.empty((num_atoms, 4))
35
+ for i in range(num_atoms):
36
+ line = struct.readline()
37
+ linesplit = line.split()
38
+ atom_coord[i, :] = linesplit[1:]
39
+ np.savetxt('{}/element.dat'.format(output_path), atom_coord[:,0], fmt='%d')
40
+
41
+ atom_coord_cart = np.genfromtxt('{}/{}.XV'.format(input_path,system_name),skip_header = 4)
42
+ atom_coord_cart = atom_coord_cart[:,2:5] * 0.529177249
43
+ np.savetxt('{}/site_positions.dat'.format(output_path), np.transpose(atom_coord_cart))
44
+
45
+ orb_indx = np.genfromtxt('{}/{}.ORB_INDX'.format(input_path,system_name), skip_header=3, skip_footer=17)
46
+ # orb_indx rows: 0 orbital id 1 atom id 2 atom type 3 element symbol
47
+ # 4 orbital id within atom 5 n 6 l
48
+ # 7 m 8 zeta 9 Polarized? 10 orbital symmetry
49
+ # 11 rc(a.u.) 12-14 R 15 equivalent orbital index in uc
50
+
53
+ with open('{}/R_list.dat'.format(output_path),'w') as R_list_f:
54
+ R_prev = np.empty(3)
55
+ for i in range(len(orb_indx)):
56
+ R = orb_indx[i, 12:15]
57
+ if (R != R_prev).any():
58
+ R_prev = R
59
+ R_list_f.write('{} {} {}\n'.format(int(R[0]), int(R[1]), int(R[2])))
60
+
61
+ ia2Riua = np.empty((0,4)) #DeepH key
62
+ ia = 0
63
+ for i in range(len(orb_indx)):
64
+ if orb_indx[i][1] != ia:
65
+ ia = orb_indx[i][1]
66
+ Riua = np.empty((1,4))
67
+ Riua[0,0:3] = orb_indx[i][12:15]
68
+ iuo = int(orb_indx[i][15])
69
+ iua = int(orb_indx[iuo-1,1])
70
+ Riua[0,3] = int(iua)
71
+ ia2Riua = np.append(ia2Riua, Riua)
72
+ ia2Riua = ia2Riua.reshape(int(len(ia2Riua)/4),4)
73
+
74
+
75
+ #hamiltonians.h5, density_matrixs.h5, overlap.h5
76
+ info = {'nsites' : num_atoms, 'isorthogonal': False, 'isspinful': False, 'norbits': len(orb_indx)}
77
+ with open('{}/info.json'.format(output_path), 'w') as info_f:
78
+ json.dump(info, info_f)
79
+
80
+ a1 = lattice[0, :]
81
+ a2 = lattice[1, :]
82
+ a3 = lattice[2, :]
83
+ b1 = 2 * np.pi * np.cross(a2, a3) / (np.dot(a1, np.cross(a2, a3)))
84
+ b2 = 2 * np.pi * np.cross(a3, a1) / (np.dot(a2, np.cross(a3, a1)))
85
+ b3 = 2 * np.pi * np.cross(a1, a2) / (np.dot(a3, np.cross(a1, a2)))
86
+ rlattice = np.array([b1, b2, b3])
87
+ np.savetxt('{}/rlat.dat'.format(output_path), np.transpose(rlattice), fmt='%.18e')
88
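The three formulas above are the standard reciprocal-lattice construction; a quick check against the defining relation b_i · a_j = 2π δ_ij, with a made-up lattice for illustration:

```python
import numpy as np

lattice = np.array([[2.0, 0.0, 0.0],    # arbitrary non-singular lattice
                    [0.0, 3.0, 0.0],
                    [1.0, 0.0, 4.0]])
a1, a2, a3 = lattice
b1 = 2 * np.pi * np.cross(a2, a3) / np.dot(a1, np.cross(a2, a3))
b2 = 2 * np.pi * np.cross(a3, a1) / np.dot(a2, np.cross(a3, a1))
b3 = 2 * np.pi * np.cross(a1, a2) / np.dot(a3, np.cross(a1, a2))
rlattice = np.array([b1, b2, b3])

# entry (i, j) of rlattice @ lattice.T is b_i . a_j = 2*pi*delta_ij
assert np.allclose(rlattice @ lattice.T, 2 * np.pi * np.eye(3))
```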
+
89
+ # Parse orbital type information
90
+ i = 0
91
+ with open('{}/orbital_types.dat'.format(output_path), 'w') as orb_type_f:
92
+ atom_current = 0
93
+ while True: # Loop over atoms in unitcell
94
+ if atom_current != orb_indx[i, 1]:
95
+ if atom_current != 0:
96
+ for j in range(4):
97
+ for _ in range(int(atom_orb_cnt[j]/(2*j+1))):
98
+ orb_type_f.write('{} '.format(j))
99
+ orb_type_f.write('\n')
100
+
101
+ atom_current = int(orb_indx[i, 1])
102
+ atom_orb_cnt = np.array([0,0,0,0]) # number of s, p, d, f orbitals in specific atom
103
+ l = int(orb_indx[i, 6])
104
+ atom_orb_cnt[l] += 1
105
+ i += 1
106
+ if i > len(orb_indx)-1:
107
+ for j in range(4):
108
+ for _ in range(int(atom_orb_cnt[j]/(2*j+1))):
109
+ orb_type_f.write('{} '.format(j))
110
+ orb_type_f.write('\n')
111
+ break
112
+ if orb_indx[i, 0] != orb_indx[i, 15]:
113
+ for j in range(4):
114
+ for _ in range(int(atom_orb_cnt[j]/(2*j+1))):
115
+ orb_type_f.write('{} '.format(j))
116
+ orb_type_f.write('\n')
117
+ break
118
+
119
+ # builds the map from orbital index to the DeepH (R, atom, orbital) keys used in the *.h5 files
120
+ orb2deephorb = np.zeros((len(orb_indx), 5))
121
+ atom_current = 1
122
+ orb_atom_current = np.empty((0)) # stores orbitals' id in siesta, n, l, m and z, will be reshaped into orb*5
123
+ t = 0
124
+ for i in range(len(orb_indx)):
125
+ orb_atom_current = np.append(orb_atom_current, i)
126
+ orb_atom_current = np.append(orb_atom_current, orb_indx[i,5:9])
127
+ if i != len(orb_indx)-1 :
128
+ if orb_indx[i+1,1] != atom_current:
129
+ orb_atom_current = np.reshape(orb_atom_current,((int(len(orb_atom_current)/5),5)))
130
+ for j in range(len(orb_atom_current)):
131
+ if orb_atom_current[j,2] == 1: #p
132
+ if orb_atom_current[j,3] == -1:
133
+ orb_atom_current[j,3] = 0
134
+ elif orb_atom_current[j,3] == 0:
135
+ orb_atom_current[j,3] = 1
136
+ elif orb_atom_current[j,3] == 1:
137
+ orb_atom_current[j,3] = -1
138
+ if orb_atom_current[j,2] == 2: #d
139
+ if orb_atom_current[j,3] == -2:
140
+ orb_atom_current[j,3] = 0
141
+ elif orb_atom_current[j,3] == -1:
142
+ orb_atom_current[j,3] = 2
143
+ elif orb_atom_current[j,3] == 0:
144
+ orb_atom_current[j,3] = -2
145
+ elif orb_atom_current[j,3] == 1:
146
+ orb_atom_current[j,3] = 1
147
+ elif orb_atom_current[j,3] == 2:
148
+ orb_atom_current[j,3] = -1
149
+ if orb_atom_current[j,2] == 3: #f
150
+ if orb_atom_current[j,3] == -3:
151
+ orb_atom_current[j,3] = 0
152
+ elif orb_atom_current[j,3] == -2:
153
+ orb_atom_current[j,3] = 1
154
+ elif orb_atom_current[j,3] == -1:
155
+ orb_atom_current[j,3] = -1
156
+ elif orb_atom_current[j,3] == 0:
157
+ orb_atom_current[j,3] = 2
158
+ elif orb_atom_current[j,3] == 1:
159
+ orb_atom_current[j,3] = -2
160
+ elif orb_atom_current[j,3] == 2:
161
+ orb_atom_current[j,3] = 3
162
+ elif orb_atom_current[j,3] == 3:
163
+ orb_atom_current[j,3] = -3
164
+ sort_index = np.zeros(len(orb_atom_current))
165
+ for j in range(len(orb_atom_current)):
166
+ sort_index[j] = orb_atom_current[j,3] + 10 * orb_atom_current[j,4] + 100 * orb_atom_current[j,1] + 1000 * orb_atom_current[j,2]
167
+ orb_order = np.argsort(sort_index)
168
+ tmpt = np.empty(len(orb_order))
169
+ for j in range(len(orb_order)):
170
+ tmpt[orb_order[j]] = j
171
+ orb_order = tmpt
172
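The three-line `tmpt` loop above inverts the permutation returned by `np.argsort`, turning "position of the j-th smallest key" into "rank of orbital j in the sorted order". A small sketch (with made-up sort keys) showing it agrees with applying `argsort` twice:

```python
import numpy as np

sort_index = np.array([30.0, 10.0, 20.0])   # made-up sort keys
orb_order = np.argsort(sort_index)          # [1, 2, 0]: indices in sorted order

# invert the permutation, exactly as the loop above does
ranks = np.empty(len(orb_order), dtype=int)
for j in range(len(orb_order)):
    ranks[orb_order[j]] = j

assert np.array_equal(ranks, np.argsort(np.argsort(sort_index)))
assert np.array_equal(ranks, np.array([2, 0, 1]))  # rank of each key
```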
+ for j in range(len(orb_atom_current)):
173
+ orb2deephorb[t,0:3] = np.round(orb_indx[t,12:15])
174
+ orb2deephorb[t,3] = ia2Riua[int(orb_indx[t,1])-1,3]
175
+ orb2deephorb[t,4] = int(orb_order[j])
176
+ t += 1
177
+ atom_current += 1
178
+ orb_atom_current = np.empty((0))
179
+
180
+ orb_atom_current = np.reshape(orb_atom_current,((int(len(orb_atom_current)/5),5)))
181
+ for j in range(len(orb_atom_current)):
182
+ if orb_atom_current[j,2] == 1:
183
+ if orb_atom_current[j,3] == -1:
184
+ orb_atom_current[j,3] = 0
185
+ elif orb_atom_current[j,3] == 0:
186
+ orb_atom_current[j,3] = 1
187
+ elif orb_atom_current[j,3] == 1:
188
+ orb_atom_current[j,3] = -1
189
+ if orb_atom_current[j,2] == 2:
190
+ if orb_atom_current[j,3] == -2:
191
+ orb_atom_current[j,3] = 0
192
+ elif orb_atom_current[j,3] == -1:
193
+ orb_atom_current[j,3] = 2
194
+ elif orb_atom_current[j,3] == 0:
195
+ orb_atom_current[j,3] = -2
196
+ elif orb_atom_current[j,3] == 1:
197
+ orb_atom_current[j,3] = 1
198
+ elif orb_atom_current[j,3] == 2:
199
+ orb_atom_current[j,3] = -1
200
+ if orb_atom_current[j,2] == 3: #f
201
+ if orb_atom_current[j,3] == -3:
202
+ orb_atom_current[j,3] = 0
203
+ elif orb_atom_current[j,3] == -2:
204
+ orb_atom_current[j,3] = 1
205
+ elif orb_atom_current[j,3] == -1:
206
+ orb_atom_current[j,3] = -1
207
+ elif orb_atom_current[j,3] == 0:
208
+ orb_atom_current[j,3] = 2
209
+ elif orb_atom_current[j,3] == 1:
210
+ orb_atom_current[j,3] = -2
211
+ elif orb_atom_current[j,3] == 2:
212
+ orb_atom_current[j,3] = 3
213
+ elif orb_atom_current[j,3] == 3:
214
+ orb_atom_current[j,3] = -3
215
+ sort_index = np.zeros(len(orb_atom_current))
216
+ for j in range(len(orb_atom_current)):
217
+ sort_index[j] = orb_atom_current[j,3] + 10 * orb_atom_current[j,4] + 100 * orb_atom_current[j,1] + 1000 * orb_atom_current[j,2]
218
+ orb_order = np.argsort(sort_index)
219
+ tmpt = np.empty(len(orb_order))
220
+ for j in range(len(orb_order)):
221
+ tmpt[orb_order[j]] = j
222
+ orb_order = tmpt
223
+ for j in range(len(orb_atom_current)):
224
+ orb2deephorb[t,0:3] = np.round(orb_indx[t,12:15])
225
+ orb2deephorb[t,3] = ia2Riua[int(orb_indx[t,1])-1,3]
226
+ orb2deephorb[t,4] = int(orb_order[j])
227
+ t += 1
228
+
229
+ # Read the HSX file. Only H and S are needed, but the Fortran unformatted record layout forces the intervening records to be read as well
230
+ f = FortranFile('{}/{}.HSX'.format(input_path,system_name), 'r')
231
+ tmpt = f.read_ints() # no_u, no_s, nspin, nh
232
+ no_u = tmpt[0]
233
+ no_s = tmpt[1]
234
+ nspin = tmpt[2]
235
+ nh = tmpt[3]
236
+ tmpt = f.read_ints() # gamma
237
+ tmpt = f.read_ints() # indxuo
238
+ tmpt = f.read_ints() # numh
239
+ maxnumh = max(tmpt)
240
+ listh = np.zeros((no_u, maxnumh),dtype=int)
241
+ for i in range(no_u):
242
+ tmpt=f.read_ints() # listh
243
+ for j in range(len(tmpt)):
244
+ listh[i,j] = tmpt[j]
245
+
246
+ # finds set of connected atoms
247
+ connected_atoms = set()
248
+ for i in range(no_u):
249
+ for j in range(maxnumh):
250
+ if listh[i,j] == 0:
252
+ break
253
+ else:
254
+ atom_1 = int(orb2deephorb[i,3])#orbit i belongs to atom_1
255
+ atom_2 = int(orb2deephorb[listh[i,j]-1,3])# orbit j belongs to atom_2
256
+ Rijk = orb2deephorb[listh[i,j]-1,0:3]
257
+ Rijk = Rijk.astype(int)
258
+ connected_atoms = connected_atoms | set(['[{}, {}, {}, {}, {}]'.format(Rijk[0],Rijk[1],Rijk[2],atom_1,atom_2)])
259
+
260
+
261
+ H_block_sparse = dict()
262
+ for atom_pair in connected_atoms:
263
+ H_block_sparse[atom_pair] = []
264
+ # convert the CSR-like sparse storage into COO blocks grouped by atomic pair
265
+ for i in range(nspin):
266
+ for j in range(no_u):
267
+ tmpt=f.read_reals(dtype='<f4') # Hamiltonian
268
+ for k in range(len(tmpt)):
269
+ m = 0 # some SIESTA orbitals differ from the DeepH convention by a factor of (-1)
270
+ i2 = j
271
+ j2 = k
272
+ atom_1 = int(orb2deephorb[i2,3])
273
+ m += orb_indx[i2,7]
274
+ atom_2 = int(orb2deephorb[listh[i2,j2]-1,3])
275
+ m += orb_indx[listh[i2,j2]-1,7]
276
+ Rijk = orb2deephorb[listh[i2,j2]-1,0:3]
277
+ Rijk = Rijk.astype(int)
278
+ H_block_sparse['[{}, {}, {}, {}, {}]'.format(Rijk[0],Rijk[1],Rijk[2],atom_1,atom_2)].append([int(orb2deephorb[i2,4]),int(orb2deephorb[listh[i2,j2]-1,4]),tmpt[k]*((-1)**m)])
280
+
281
+ S_block_sparse = dict()
282
+ for atom_pair in connected_atoms:
283
+ S_block_sparse[atom_pair] = []
284
+
285
+ for j in range(no_u):
286
+ tmpt=f.read_reals(dtype='<f4') # Overlap
287
+ for k in range(len(tmpt)):
288
+ m = 0
289
+ i2 = j
290
+ j2 = k
291
+ atom_1 = int(orb2deephorb[i2,3])
292
+ m += orb_indx[i2,7]
293
+ atom_2 = int(orb2deephorb[listh[i2,j2]-1,3])
294
+ m += orb_indx[listh[i2,j2]-1,7]
295
+ Rijk = orb2deephorb[listh[i2,j2]-1,0:3]
296
+ Rijk = Rijk.astype(int)
297
+ S_block_sparse['[{}, {}, {}, {}, {}]'.format(Rijk[0],Rijk[1],Rijk[2],atom_1,atom_2)].append([int(orb2deephorb[i2,4]),int(orb2deephorb[listh[i2,j2]-1,4]),tmpt[k]*((-1)**m)])
300
+
301
+ # find the number of orbitals on each atom
302
+ nua = int(max(orb2deephorb[:,3]))
303
+ atom2nu = np.zeros(nua)
304
+ for i in range(len(orb_indx)):
305
+ if orb_indx[i,12]==0 and orb_indx[i,13]==0 and orb_indx[i,14]==0:
306
+ if orb_indx[i,4] > atom2nu[int(orb_indx[i,1])-1]:
307
+ atom2nu[int(orb_indx[i,1]-1)] = int(orb_indx[i,4])
308
+
309
+ # converts coo sparse matrix into full matrix
310
+ for Rijkab in H_block_sparse.keys():
311
+ sparse_form = H_block_sparse[Rijkab]
312
+ ia1 = int(Rijkab[1:-1].split(',')[3])
313
+ ia2 = int(Rijkab[1:-1].split(',')[4])
314
+ tmpt = np.zeros((int(atom2nu[ia1-1]),int(atom2nu[ia2-1])))
315
+ for i in range(len(sparse_form)):
316
+ tmpt[int(sparse_form[i][0]),int(sparse_form[i][1])]=sparse_form[i][2]/0.036749324533634074/2
317
+ H_block_sparse[Rijkab]=tmpt
318
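The division by 0.036749324533634074 and then by 2 in the loop above converts the HSX Hamiltonian from Ry to eV: 1 eV ≈ 0.0367493 Ha and 1 Ry = 0.5 Ha, so the combined factor is one Rydberg expressed in eV:

```python
# combined conversion factor applied above: value / 0.036749324533634074 / 2
factor = 1 / 0.036749324533634074 / 2
assert abs(factor - 13.605693) < 1e-4   # Ry -> eV (Rydberg energy in eV)
```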
+ f.close()
319
+ f = h5py.File('{}/hamiltonians.h5'.format(output_path),'w')
320
+ for Rijkab in H_block_sparse.keys():
321
+ f[Rijkab] = H_block_sparse[Rijkab]
322
+
323
+ for Rijkab in S_block_sparse.keys():
324
+ sparse_form = S_block_sparse[Rijkab]
325
+ ia1 = int(Rijkab[1:-1].split(',')[3])
326
+ ia2 = int(Rijkab[1:-1].split(',')[4])
327
+ tmpt = np.zeros((int(atom2nu[ia1-1]),int(atom2nu[ia2-1])))
328
+ for i in range(len(sparse_form)):
329
+ tmpt[int(sparse_form[i][0]),int(sparse_form[i][1])]=sparse_form[i][2]
330
+ S_block_sparse[Rijkab]=tmpt
331
+
332
+ f.close()
333
+ f = h5py.File('{}/overlaps.h5'.format(output_path),'w')
334
+ for Rijkab in S_block_sparse.keys():
335
+ f[Rijkab] = S_block_sparse[Rijkab]
336
+ f.close()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/rotate.py ADDED
@@ -0,0 +1,277 @@
1
+ import json
2
+ import os.path
3
+ import warnings
4
+
5
+ import numpy as np
6
+ import h5py
7
+ import torch
8
+ from e3nn.o3 import Irrep, Irreps, matrix_to_angles
9
+
10
+ from deeph import load_orbital_types
11
+
12
+ dtype_dict = {
13
+ np.float32: (torch.float32, torch.float32, torch.complex64),
14
+ np.float64: (torch.float64, torch.float64, torch.complex128),
15
+ np.complex64: (torch.complex64, torch.float32, torch.complex64),
16
+ np.complex128: (torch.complex128, torch.float64, torch.complex128),
17
+ torch.float32: (torch.float32, torch.float32, torch.complex64),
18
+ torch.float64: (torch.float64, torch.float64, torch.complex128),
19
+ torch.complex64: (torch.complex64, torch.float32, torch.complex64),
20
+ torch.complex128: (torch.complex128, torch.float64, torch.complex128),
21
+ }
22
+
23
+
24
+ class Rotate:
25
+ def __init__(self, torch_dtype, torch_dtype_real=torch.float64, torch_dtype_complex=torch.cdouble,
26
+ device=torch.device('cpu'), spinful=False):
27
+ self.dtype = torch_dtype
28
+ self.torch_dtype_real = torch_dtype_real
29
+ self.device = device
30
+ self.spinful = spinful
31
+ sqrt_2 = 1.4142135623730951
32
+ self.Us_openmx = {
33
+ 0: torch.tensor([1], dtype=torch_dtype_complex, device=device),
34
+ 1: torch.tensor([[-1 / sqrt_2, 1j / sqrt_2, 0], [0, 0, 1], [1 / sqrt_2, 1j / sqrt_2, 0]],
35
+ dtype=torch_dtype_complex, device=device),
36
+ 2: torch.tensor([[0, 1 / sqrt_2, -1j / sqrt_2, 0, 0],
37
+ [0, 0, 0, -1 / sqrt_2, 1j / sqrt_2],
38
+ [1, 0, 0, 0, 0],
39
+ [0, 0, 0, 1 / sqrt_2, 1j / sqrt_2],
40
+ [0, 1 / sqrt_2, 1j / sqrt_2, 0, 0]], dtype=torch_dtype_complex, device=device),
41
+ 3: torch.tensor([[0, 0, 0, 0, 0, -1 / sqrt_2, 1j / sqrt_2],
42
+ [0, 0, 0, 1 / sqrt_2, -1j / sqrt_2, 0, 0],
43
+ [0, -1 / sqrt_2, 1j / sqrt_2, 0, 0, 0, 0],
44
+ [1, 0, 0, 0, 0, 0, 0],
45
+ [0, 1 / sqrt_2, 1j / sqrt_2, 0, 0, 0, 0],
46
+ [0, 0, 0, 1 / sqrt_2, 1j / sqrt_2, 0, 0],
47
+ [0, 0, 0, 0, 0, 1 / sqrt_2, 1j / sqrt_2]], dtype=torch_dtype_complex, device=device),
48
+ }
49
+ self.Us_openmx2wiki = {
50
+ 0: torch.eye(1, dtype=torch_dtype).to(device=device),
51
+ 1: torch.eye(3, dtype=torch_dtype)[[1, 2, 0]].to(device=device),
52
+ 2: torch.eye(5, dtype=torch_dtype)[[2, 4, 0, 3, 1]].to(device=device),
53
+ 3: torch.eye(7, dtype=torch_dtype)[[6, 4, 2, 0, 1, 3, 5]].to(device=device)
54
+ }
55
+ self.Us_wiki2openmx = {k: v.T for k, v in self.Us_openmx2wiki.items()}
56
+
57
+ def rotate_e3nn_v(self, v, R, l, order_xyz=True):
58
+ if self.spinful:
59
+ raise NotImplementedError
60
+ assert len(R.shape) == 2
61
+ if order_xyz:
62
+ R_e3nn = self.rotate_matrix_convert(R)
63
+ else:
64
+ R_e3nn = R
65
+ return v @ Irrep(l, 1).D_from_matrix(R_e3nn)
66
+
67
+ def rotate_openmx_H_old(self, H, R, l_lefts, l_rights, order_xyz=True):
68
+ assert len(R.shape) == 2
69
+ if order_xyz:
70
+ R_e3nn = self.rotate_matrix_convert(R)
71
+ else:
72
+ R_e3nn = R
73
+
74
+ block_lefts = []
75
+ for l_left in l_lefts:
76
+ block_lefts.append(
77
+ self.Us_openmx2wiki[l_left].T @ Irrep(l_left, 1).D_from_matrix(R_e3nn) @ self.Us_openmx2wiki[l_left])
78
+ rotation_left = torch.block_diag(*block_lefts)
79
+
80
+ block_rights = []
81
+ for l_right in l_rights:
82
+ block_rights.append(
83
+ self.Us_openmx2wiki[l_right].T @ Irrep(l_right, 1).D_from_matrix(R_e3nn) @ self.Us_openmx2wiki[l_right])
84
+ rotation_right = torch.block_diag(*block_rights)
85
+
86
+ return torch.einsum("cd,ca,db->ab", H, rotation_left, rotation_right)
87
+
88
+ def rotate_openmx_H(self, H, R, l_lefts, l_rights, order_xyz=True):
89
+ # spin-1/2 support written by gongxx
90
+ assert len(R.shape) == 2
91
+ if order_xyz:
92
+ R_e3nn = self.rotate_matrix_convert(R)
93
+ else:
94
+ R_e3nn = R
95
+ irreps_left = Irreps([(1, (l, 1)) for l in l_lefts])
96
+ irreps_right = Irreps([(1, (l, 1)) for l in l_rights])
97
+ U_left = irreps_left.D_from_matrix(R_e3nn)
98
+ U_right = irreps_right.D_from_matrix(R_e3nn)
99
+ openmx2wiki_left = torch.block_diag(*[self.Us_openmx2wiki[l] for l in l_lefts])
100
+ openmx2wiki_right = torch.block_diag(*[self.Us_openmx2wiki[l] for l in l_rights])
101
+ if self.spinful:
102
+ U_left = torch.kron(self.D_one_half(R_e3nn), U_left)
103
+ U_right = torch.kron(self.D_one_half(R_e3nn), U_right)
104
+ openmx2wiki_left = torch.block_diag(openmx2wiki_left, openmx2wiki_left)
105
+ openmx2wiki_right = torch.block_diag(openmx2wiki_right, openmx2wiki_right)
106
+ return openmx2wiki_left.T @ U_left.transpose(-1, -2).conj() @ openmx2wiki_left @ H \
107
+ @ openmx2wiki_right.T @ U_right @ openmx2wiki_right
108
+
109
+ def rotate_openmx_phiVdphi(self, phiVdphi, R, l_lefts, l_rights, order_xyz=True):
110
+ if self.spinful:
111
+ raise NotImplementedError
112
+ assert phiVdphi.shape[-1] == 3
113
+ assert len(R.shape) == 2
114
+ if order_xyz:
115
+ R_e3nn = self.rotate_matrix_convert(R)
116
+ else:
117
+ R_e3nn = R
118
+ block_lefts = []
119
+ for l_left in l_lefts:
120
+ block_lefts.append(
121
+ self.Us_openmx2wiki[l_left].T @ Irrep(l_left, 1).D_from_matrix(R_e3nn) @ self.Us_openmx2wiki[l_left])
122
+ rotation_left = torch.block_diag(*block_lefts)
123
+
124
+ block_rights = []
125
+ for l_right in l_rights:
126
+ block_rights.append(
127
+ self.Us_openmx2wiki[l_right].T @ Irrep(l_right, 1).D_from_matrix(R_e3nn) @ self.Us_openmx2wiki[l_right])
128
+ rotation_right = torch.block_diag(*block_rights)
129
+
130
+ rotation_x = self.Us_openmx2wiki[1].T @ Irrep(1, 1).D_from_matrix(R_e3nn) @ self.Us_openmx2wiki[1]
131
+
132
+ return torch.einsum("def,da,eb,fc->abc", phiVdphi, rotation_left, rotation_right, rotation_x)
133
+
134
+ def wiki2openmx_H(self, H, l_left, l_right):
135
+ if self.spinful:
136
+ raise NotImplementedError
137
+ return self.Us_openmx2wiki[l_left].T @ H @ self.Us_openmx2wiki[l_right]
138
+
139
+ def openmx2wiki_H(self, H, l_left, l_right):
140
+ if self.spinful:
141
+ raise NotImplementedError
142
+ return self.Us_openmx2wiki[l_left] @ H @ self.Us_openmx2wiki[l_right].T
143
+
144
+ def rotate_matrix_convert(self, R):
145
+ return R.index_select(0, R.new_tensor([1, 2, 0]).int()).index_select(1, R.new_tensor([1, 2, 0]).int())
146
+
147
+ def D_one_half(self, R):
148
+ # written by gongxx
149
+ assert self.spinful
150
+ d = torch.det(R).sign()
151
+ R = d[..., None, None] * R
152
+ k = (1 - d) / 2 # parity index
153
+ alpha, beta, gamma = matrix_to_angles(R)
154
+ J = torch.tensor([[1, 1], [1j, -1j]], dtype=self.dtype) / 1.4142135623730951 # <1/2 mz|1/2 my>
155
+ Uz1 = self._sp_z_rot(alpha)
156
+ Uy = J @ self._sp_z_rot(beta) @ J.T.conj()
157
+ Uz2 = self._sp_z_rot(gamma)
158
+ return Uz1 @ Uy @ Uz2
159
+
160
+ def _sp_z_rot(self, angle):
161
+ # written by gongxx
162
+ assert self.spinful
163
+ M = torch.zeros([*angle.shape, 2, 2], dtype=self.dtype)
164
+ inds = torch.tensor([0, 1])
165
+ freqs = torch.tensor([0.5, -0.5], dtype=self.dtype)
166
+ M[..., inds, inds] = torch.exp(- freqs * (1j) * angle[..., None])
167
+ return M
168
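A quick property check on `_sp_z_rot`: the spin-1/2 z-rotation is diag(e^{-iα/2}, e^{+iα/2}), so a full 2π rotation yields −I and angles compose additively. A numpy sketch of the same construction (the class itself uses torch; numpy is used here only for the check):

```python
import numpy as np

def sp_z_rot(angle):
    # same construction as _sp_z_rot: diag(exp(-i*alpha/2), exp(+i*alpha/2))
    freqs = np.array([0.5, -0.5])
    return np.diag(np.exp(-1j * freqs * angle))

assert np.allclose(sp_z_rot(0.0), np.eye(2))
assert np.allclose(sp_z_rot(2 * np.pi), -np.eye(2))  # 2*pi spinor rotation = -identity
# composing two z-rotations adds the angles
assert np.allclose(sp_z_rot(0.3) @ sp_z_rot(0.4), sp_z_rot(0.7))
```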
+
169
+
170
+ def get_rh(input_dir, output_dir, target='hamiltonian'):
171
+ torch_device = torch.device('cpu')
172
+ assert target in ['hamiltonian', 'phiVdphi']
173
+ file_name = {
174
+ 'hamiltonian': 'hamiltonians.h5',
175
+ 'phiVdphi': 'phiVdphi.h5',
176
+ }[target]
177
+ prime_file_name = {
178
+ 'hamiltonian': 'rh.h5',
179
+ 'phiVdphi': 'rphiVdphi.h5',
180
+ }[target]
181
+ assert os.path.exists(os.path.join(input_dir, file_name))
182
+ assert os.path.exists(os.path.join(input_dir, 'rc.h5'))
183
+ assert os.path.exists(os.path.join(input_dir, 'orbital_types.dat'))
184
+ assert os.path.exists(os.path.join(input_dir, 'info.json'))
185
+
186
+ atom_num_orbital, orbital_types = load_orbital_types(os.path.join(input_dir, 'orbital_types.dat'),
187
+ return_orbital_types=True)
188
+ nsite = len(atom_num_orbital)
189
+ with open(os.path.join(input_dir, 'info.json'), 'r') as info_f:
190
+ info_dict = json.load(info_f)
191
+ spinful = info_dict["isspinful"]
192
+ fid_H = h5py.File(os.path.join(input_dir, file_name), 'r')
193
+ fid_rc = h5py.File(os.path.join(input_dir, 'rc.h5'), 'r')
194
+ fid_rh = h5py.File(os.path.join(output_dir, prime_file_name), 'w')
195
+ assert '[0, 0, 0, 1, 1]' in fid_H.keys()
196
+ h5_dtype = fid_H['[0, 0, 0, 1, 1]'].dtype
197
+ torch_dtype, torch_dtype_real, torch_dtype_complex = dtype_dict[h5_dtype.type]
198
+ rotate_kernel = Rotate(torch_dtype, torch_dtype_real=torch_dtype_real, torch_dtype_complex=torch_dtype_complex,
199
+ device=torch_device, spinful=spinful)
200
+
201
+ for key_str, hamiltonian in fid_H.items():
202
+ if key_str not in fid_rc:
203
+ warnings.warn(f'Hamiltonian matrix block ({key_str}) does not have a local coordinate')
204
+ continue
205
+ rotation_matrix = torch.tensor(fid_rc[key_str], dtype=torch_dtype_real, device=torch_device)
206
+ key = json.loads(key_str)
207
+ atom_i = key[3] - 1
208
+ atom_j = key[4] - 1
209
+ assert atom_i >= 0
210
+ assert atom_i < nsite
211
+ assert atom_j >= 0
212
+ assert atom_j < nsite
213
+ if target == 'hamiltonian':
214
+ rotated_hamiltonian = rotate_kernel.rotate_openmx_H(torch.tensor(hamiltonian), rotation_matrix,
215
+ orbital_types[atom_i], orbital_types[atom_j])
216
+ elif target == 'phiVdphi':
217
+ rotated_hamiltonian = rotate_kernel.rotate_openmx_phiVdphi(torch.tensor(hamiltonian), rotation_matrix,
218
+ orbital_types[atom_i], orbital_types[atom_j])
219
+ fid_rh[key_str] = rotated_hamiltonian.numpy()
220
+
221
+ fid_H.close()
222
+ fid_rc.close()
223
+ fid_rh.close()
224
+
225
+
226
+ def rotate_back(input_dir, output_dir, target='hamiltonian'):
227
+ torch_device = torch.device('cpu')
228
+ assert target in ['hamiltonian', 'phiVdphi']
229
+ file_name = {
230
+ 'hamiltonian': 'hamiltonians_pred.h5',
231
+ 'phiVdphi': 'phiVdphi_pred.h5',
232
+ }[target]
233
+ prime_file_name = {
234
+ 'hamiltonian': 'rh_pred.h5',
235
+ 'phiVdphi': 'rphiVdphi_pred.h5',
236
+ }[target]
237
+ assert os.path.exists(os.path.join(input_dir, prime_file_name))
238
+ assert os.path.exists(os.path.join(input_dir, 'rc.h5'))
239
+ assert os.path.exists(os.path.join(input_dir, 'orbital_types.dat'))
240
+ assert os.path.exists(os.path.join(input_dir, 'info.json'))
241
+
242
+ atom_num_orbital, orbital_types = load_orbital_types(os.path.join(input_dir, 'orbital_types.dat'),
243
+ return_orbital_types=True)
244
+ nsite = len(atom_num_orbital)
245
+ with open(os.path.join(input_dir, 'info.json'), 'r') as info_f:
246
+ info_dict = json.load(info_f)
247
+ spinful = info_dict["isspinful"]
248
+ fid_rc = h5py.File(os.path.join(input_dir, 'rc.h5'), 'r')
249
+ fid_rh = h5py.File(os.path.join(input_dir, prime_file_name), 'r')
250
+ fid_H = h5py.File(os.path.join(output_dir, file_name), 'w')
251
+ assert '[0, 0, 0, 1, 1]' in fid_rh.keys()
252
+ h5_dtype = fid_rh['[0, 0, 0, 1, 1]'].dtype
253
+ torch_dtype, torch_dtype_real, torch_dtype_complex = dtype_dict[h5_dtype.type]
254
+ rotate_kernel = Rotate(torch_dtype, torch_dtype_real=torch_dtype_real, torch_dtype_complex=torch_dtype_complex,
255
+ device=torch_device, spinful=spinful)
256
+
257
+ for key_str, rotated_hamiltonian in fid_rh.items():
258
+ assert key_str in fid_rc
259
+ rotation_matrix = torch.tensor(fid_rc[key_str], dtype=torch_dtype_real, device=torch_device).T
260
+ key = json.loads(key_str)
261
+ atom_i = key[3] - 1
262
+ atom_j = key[4] - 1
263
+ assert atom_i >= 0
264
+ assert atom_i < nsite
265
+ assert atom_j >= 0
266
+ assert atom_j < nsite
267
+ if target == 'hamiltonian':
268
+ hamiltonian = rotate_kernel.rotate_openmx_H(torch.tensor(rotated_hamiltonian), rotation_matrix,
269
+ orbital_types[atom_i], orbital_types[atom_j])
270
+ elif target == 'phiVdphi':
271
+ hamiltonian = rotate_kernel.rotate_openmx_phiVdphi(torch.tensor(rotated_hamiltonian), rotation_matrix,
272
+ orbital_types[atom_i], orbital_types[atom_j])
273
+ fid_H[key_str] = hamiltonian.numpy()
274
+
275
+ fid_H.close()
276
+ fid_rc.close()
277
+ fid_rh.close()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__init__.py ADDED
File without changes
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (154 Bytes). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__pycache__/preprocess.cpython-312.pyc ADDED
Binary file (14.5 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/__pycache__/train.cpython-312.pyc ADDED
Binary file (1.36 kB). View file
 
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/evaluate.py ADDED
@@ -0,0 +1,173 @@
1
+ import csv
2
+ import os
3
+ import argparse
4
+ import time
5
+ import warnings
6
+ from configparser import ConfigParser
7
+
8
+ import numpy as np
9
+ import torch
10
+ from pymatgen.core.structure import Structure
11
+
12
+ from deeph import get_graph, DeepHKernel, collate_fn
13
+
14
+
15
+ def main():
16
+ parser = argparse.ArgumentParser(description='Predict Hamiltonian')
17
+ parser.add_argument('--trained_model_dir', type=str,
18
+ help='path of trained model')
19
+ parser.add_argument('--input_dir', type=str,
20
+ help='')
21
+ parser.add_argument('--output_dir', type=str,
22
+ help='')
23
+ parser.add_argument('--disable_cuda', action='store_true', help='Disable CUDA')
24
+ parser.add_argument('--save_csv', action='store_true', help='Save the result for each edge in csv format')
25
+ parser.add_argument(
26
+ '--interface',
27
+ type=str,
28
+ default='h5',
29
+ choices=['h5', 'npz'])
30
+ parser.add_argument('--huge_structure', action='store_true', help='')
31
+ args = parser.parse_args()
32
+
33
+ old_version = False
34
+ assert os.path.exists(os.path.join(args.trained_model_dir, 'config.ini'))
35
+ if os.path.exists(os.path.join(args.trained_model_dir, 'best_model.pt')) is False:
36
+ old_version = True
37
+     assert os.path.exists(os.path.join(args.trained_model_dir, 'best_model.pkl'))
+     assert os.path.exists(os.path.join(args.trained_model_dir, 'src'))
+
+     os.makedirs(args.output_dir, exist_ok=True)
+
+     config = ConfigParser()
+     config.read(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'default.ini'))
+     config.read(os.path.join(args.trained_model_dir, 'config.ini'))
+     config.set('basic', 'save_dir', os.path.join(args.output_dir))
+     config.set('basic', 'disable_cuda', str(args.disable_cuda))
+     config.set('basic', 'save_to_time_folder', 'False')
+     config.set('basic', 'tb_writer', 'False')
+     config.set('train', 'pretrained', '')
+     config.set('train', 'resume', '')
+     kernel = DeepHKernel(config)
+     if old_version is False:
+         checkpoint = kernel.build_model(args.trained_model_dir, old_version)
+     else:
+         warnings.warn('You are using a trained model from an old version')
+         checkpoint = torch.load(
+             os.path.join(args.trained_model_dir, 'best_model.pkl'),
+             map_location=kernel.device
+         )
+         for key in ['index_to_Z', 'Z_to_index', 'spinful']:
+             if key in checkpoint:
+                 setattr(kernel, key, checkpoint[key])
+         if hasattr(kernel, 'index_to_Z') is False:
+             kernel.index_to_Z = torch.arange(config.getint('basic', 'max_element') + 1)
+         if hasattr(kernel, 'Z_to_index') is False:
+             kernel.Z_to_index = torch.arange(config.getint('basic', 'max_element') + 1)
+         if hasattr(kernel, 'spinful') is False:
+             kernel.spinful = False
+         kernel.num_species = len(kernel.index_to_Z)
+         print("=> load best checkpoint (epoch {})".format(checkpoint['epoch']))
+         print(f"=> Atomic types: {kernel.index_to_Z.tolist()}, "
+               f"spinful: {kernel.spinful}, the number of atomic types: {len(kernel.index_to_Z)}.")
+         kernel.build_model(args.trained_model_dir, old_version)
+         kernel.model.load_state_dict(checkpoint['state_dict'])
+
+     with torch.no_grad():
+         input_dir = args.input_dir
+         structure = Structure(np.loadtxt(os.path.join(args.input_dir, 'lat.dat')).T,
+                               np.loadtxt(os.path.join(args.input_dir, 'element.dat')),
+                               np.loadtxt(os.path.join(args.input_dir, 'site_positions.dat')).T,
+                               coords_are_cartesian=True,
+                               to_unit_cell=False)
+         cart_coords = torch.tensor(structure.cart_coords, dtype=torch.get_default_dtype())
+         frac_coords = torch.tensor(structure.frac_coords, dtype=torch.get_default_dtype())
+         numbers = kernel.Z_to_index[torch.tensor(structure.atomic_numbers)]
+         structure.lattice.matrix.setflags(write=True)
+         lattice = torch.tensor(structure.lattice.matrix, dtype=torch.get_default_dtype())
+         inv_lattice = torch.inverse(lattice)
+
+         if os.path.exists(os.path.join(input_dir, 'graph.pkl')):
+             data = torch.load(os.path.join(input_dir, 'graph.pkl'))
+             print(f"Load processed graph from {os.path.join(input_dir, 'graph.pkl')}")
+         else:
+             begin = time.time()
+             data = get_graph(cart_coords, frac_coords, numbers, 0,
+                              r=kernel.config.getfloat('graph', 'radius'),
+                              max_num_nbr=kernel.config.getint('graph', 'max_num_nbr'),
+                              numerical_tol=1e-8, lattice=lattice, default_dtype_torch=torch.get_default_dtype(),
+                              tb_folder=args.input_dir, interface=args.interface,
+                              num_l=kernel.config.getint('network', 'num_l'),
+                              create_from_DFT=kernel.config.getboolean('graph', 'create_from_DFT', fallback=True),
+                              if_lcmp_graph=kernel.config.getboolean('graph', 'if_lcmp_graph', fallback=True),
+                              separate_onsite=kernel.separate_onsite,
+                              target=kernel.config.get('basic', 'target'), huge_structure=args.huge_structure)
+             torch.save(data, os.path.join(input_dir, 'graph.pkl'))
+             print(f"Save processed graph to {os.path.join(input_dir, 'graph.pkl')}, cost {time.time() - begin} seconds")
+
+         dataset_mask = kernel.make_mask([data])
+         batch, subgraph = collate_fn(dataset_mask)
+         sub_atom_idx, sub_edge_idx, sub_edge_ang, sub_index = subgraph
+
+         output = kernel.model(batch.x.to(kernel.device), batch.edge_index.to(kernel.device),
+                               batch.edge_attr.to(kernel.device),
+                               batch.batch.to(kernel.device),
+                               sub_atom_idx.to(kernel.device), sub_edge_idx.to(kernel.device),
+                               sub_edge_ang.to(kernel.device), sub_index.to(kernel.device),
+                               huge_structure=args.huge_structure)
+
+         label = batch.label
+         mask = batch.mask
+         output = output.cpu().reshape(label.shape)
+
+         assert label.shape == output.shape == mask.shape
+         mse = torch.pow(label - output, 2)
+         mae = torch.abs(label - output)
+
+         print()
+         for index_orb, orbital_single in enumerate(kernel.orbital):
+             if index_orb != 0:
+                 print('================================================================')
+             print('orbital:', orbital_single)
+             if kernel.spinful == False:
+                 print(f'mse: {torch.masked_select(mse[:, index_orb], mask[:, index_orb]).mean().item()}, '
+                       f'mae: {torch.masked_select(mae[:, index_orb], mask[:, index_orb]).mean().item()}')
+             else:
+                 for index_soc, str_soc in enumerate([
+                     'left_up_real', 'left_up_imag', 'right_down_real', 'right_down_imag',
+                     'right_up_real', 'right_up_imag', 'left_down_real', 'left_down_imag',
+                 ]):
+                     if index_soc != 0:
+                         print('----------------------------------------------------------------')
+                     print(str_soc, ':')
+                     index_out = index_orb * 8 + index_soc
+                     print(f'mse: {torch.masked_select(mse[:, index_out], mask[:, index_out]).mean().item()}, '
+                           f'mae: {torch.masked_select(mae[:, index_out], mask[:, index_out]).mean().item()}')
+
+         if args.save_csv:
+             edge_stru_index = torch.squeeze(batch.batch[batch.edge_index[0]]).numpy()
+             edge_slices = torch.tensor(batch.__slices__['x'])[edge_stru_index].view(-1, 1)
+             atom_ids = torch.squeeze(batch.edge_index.T - edge_slices).tolist()
+             atomic_numbers = torch.squeeze(kernel.index_to_Z[batch.x[batch.edge_index.T]]).tolist()
+             edge_infos = torch.squeeze(batch.edge_attr[:, :7].detach().cpu()).tolist()
+
+             with open(os.path.join(kernel.config.get('basic', 'save_dir'), 'error_distance.csv'), 'w', newline='') as f:
+                 writer = csv.writer(f)
+                 writer.writerow(['index', 'atom_id', 'atomic_number', 'dist', 'atom1_x', 'atom1_y', 'atom1_z',
+                                  'atom2_x', 'atom2_y', 'atom2_z']
+                                 + ['target'] * kernel.out_fea_len + ['pred'] * kernel.out_fea_len
+                                 + ['mask'] * kernel.out_fea_len)
+                 for index_edge in range(batch.edge_attr.shape[0]):
+                     writer.writerow([
+                         index_edge,
+                         atom_ids[index_edge],
+                         atomic_numbers[index_edge],
+                         *(edge_infos[index_edge]),
+                         *(label[index_edge].tolist()),
+                         *(output[index_edge].tolist()),
+                         *(mask[index_edge].tolist()),
+                     ])
+
+
+ if __name__ == '__main__':
+     main()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/inference.py ADDED
@@ -0,0 +1,157 @@
+ import os
+ import time
+ import subprocess as sp
+ import json
+
+ import argparse
+
+ from deeph import get_inference_config, rotate_back, abacus_parse
+ from deeph.preprocess import openmx_parse_overlap, get_rc
+ from deeph.inference import predict, predict_with_grad
+
+
+ def main():
+     parser = argparse.ArgumentParser(description='Deep Hamiltonian')
+     parser.add_argument('--config', default=[], nargs='+', type=str, metavar='N')
+     args = parser.parse_args()
+
+     print(f'User config name: {args.config}')
+     config = get_inference_config(args.config)
+
+     work_dir = os.path.abspath(config.get('basic', 'work_dir'))
+     OLP_dir = os.path.abspath(config.get('basic', 'OLP_dir'))
+     interface = config.get('basic', 'interface')
+     abacus_suffix = str(config.get('basic', 'abacus_suffix', fallback='ABACUS'))
+     task = json.loads(config.get('basic', 'task'))
+     assert isinstance(task, list)
+     eigen_solver = config.get('basic', 'eigen_solver')
+     disable_cuda = config.getboolean('basic', 'disable_cuda')
+     device = config.get('basic', 'device')
+     huge_structure = config.getboolean('basic', 'huge_structure')
+     restore_blocks_py = config.getboolean('basic', 'restore_blocks_py')
+     gen_rc_idx = config.getboolean('basic', 'gen_rc_idx')
+     gen_rc_by_idx = config.get('basic', 'gen_rc_by_idx')
+     with_grad = config.getboolean('basic', 'with_grad')
+     julia_interpreter = config.get('interpreter', 'julia_interpreter', fallback='')
+     python_interpreter = config.get('interpreter', 'python_interpreter', fallback='')
+     radius = config.getfloat('graph', 'radius')
+
+     if 5 in task:
+         if eigen_solver in ['sparse_jl', 'dense_jl']:
+             assert julia_interpreter, "Please specify julia_interpreter to use Julia code to calculate eigenpairs"
+         elif eigen_solver in ['dense_py']:
+             assert python_interpreter, "Please specify python_interpreter to use Python code to calculate eigenpairs"
+         else:
+             raise ValueError(f"Unknown eigen_solver: {eigen_solver}")
+     if 3 in task and not restore_blocks_py:
+         assert julia_interpreter, "Please specify julia_interpreter to use Julia code to rearrange matrix blocks"
+
+     if with_grad:
+         assert restore_blocks_py is True
+         assert 4 not in task
+         assert 5 not in task
+
+     os.makedirs(work_dir, exist_ok=True)
+     with open(os.path.join(work_dir, 'config.ini'), "w") as config_f:
+         config.write(config_f)
+
+     if not restore_blocks_py:
+         cmd3_post = f"{julia_interpreter} " \
+                     f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'inference', 'restore_blocks.jl')} " \
+                     f"--input_dir {work_dir} --output_dir {work_dir}"
+
+     if eigen_solver == 'sparse_jl':
+         cmd5 = f"{julia_interpreter} " \
+                f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'inference', 'sparse_calc.jl')} " \
+                f"--input_dir {work_dir} --output_dir {work_dir} --config {config.get('basic', 'sparse_calc_config')}"
+     elif eigen_solver == 'dense_jl':
+         cmd5 = f"{julia_interpreter} " \
+                f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'inference', 'dense_calc.jl')} " \
+                f"--input_dir {work_dir} --output_dir {work_dir} --config {config.get('basic', 'sparse_calc_config')}"
+     elif eigen_solver == 'dense_py':
+         cmd5 = f"{python_interpreter} " \
+                f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'inference', 'dense_calc.py')} " \
+                f"--input_dir {work_dir} --output_dir {work_dir} --config {config.get('basic', 'sparse_calc_config')}"
+     else:
+         raise ValueError(f"Unknown eigen_solver: {eigen_solver}")
+
+     print(f"\n~~~~~~~ 1.parse_Overlap\n")
+     print(f"\n~~~~~~~ 2.get_local_coordinate\n")
+     print(f"\n~~~~~~~ 3.get_pred_Hamiltonian\n")
+     if not restore_blocks_py:
+         print(f"\n~~~~~~~ 3_post.restore_blocks, command: \n{cmd3_post}\n")
+     print(f"\n~~~~~~~ 4.rotate_back\n")
+     print(f"\n~~~~~~~ 5.sparse_calc, command: \n{cmd5}\n")
+
+     if 1 in task:
+         begin = time.time()
+         print(f"\n####### Begin 1.parse_Overlap")
+         if interface == 'openmx':
+             assert os.path.exists(os.path.join(OLP_dir, 'openmx.out')), "Necessary files could not be found in OLP_dir"
+             assert os.path.exists(os.path.join(OLP_dir, 'output')), "Necessary files could not be found in OLP_dir"
+             openmx_parse_overlap(OLP_dir, work_dir)
+         elif interface == 'abacus':
+             print("Output subdirectories:", "OUT." + abacus_suffix)
+             assert os.path.exists(os.path.join(OLP_dir, 'SR.csr')), "Necessary files could not be found in OLP_dir"
+             assert os.path.exists(os.path.join(OLP_dir, f'OUT.{abacus_suffix}')), "Necessary files could not be found in OLP_dir"
+             abacus_parse(OLP_dir, work_dir, data_name=f'OUT.{abacus_suffix}', only_S=True)
+         assert os.path.exists(os.path.join(work_dir, "overlaps.h5"))
+         assert os.path.exists(os.path.join(work_dir, "lat.dat"))
+         assert os.path.exists(os.path.join(work_dir, "rlat.dat"))
+         assert os.path.exists(os.path.join(work_dir, "site_positions.dat"))
+         assert os.path.exists(os.path.join(work_dir, "orbital_types.dat"))
+         assert os.path.exists(os.path.join(work_dir, "element.dat"))
+         print('\n******* Finish 1.parse_Overlap, cost %d seconds\n' % (time.time() - begin))
+
+     if not with_grad and 2 in task:
+         begin = time.time()
+         print(f"\n####### Begin 2.get_local_coordinate")
+         get_rc(work_dir, work_dir, radius=radius, gen_rc_idx=gen_rc_idx, gen_rc_by_idx=gen_rc_by_idx,
+                create_from_DFT=config.getboolean('graph', 'create_from_DFT'))
+         assert os.path.exists(os.path.join(work_dir, "rc.h5"))
+         print('\n******* Finish 2.get_local_coordinate, cost %d seconds\n' % (time.time() - begin))
+
+     if 3 in task:
+         begin = time.time()
+         print(f"\n####### Begin 3.get_pred_Hamiltonian")
+         trained_model_dir = config.get('basic', 'trained_model_dir')
+         if trained_model_dir[0] == '[' and trained_model_dir[-1] == ']':
+             trained_model_dir = json.loads(trained_model_dir)
+         if with_grad:
+             predict_with_grad(input_dir=work_dir, output_dir=work_dir, disable_cuda=disable_cuda, device=device,
+                               huge_structure=huge_structure, trained_model_dirs=trained_model_dir)
+         else:
+             predict(input_dir=work_dir, output_dir=work_dir, disable_cuda=disable_cuda, device=device,
+                     huge_structure=huge_structure, restore_blocks_py=restore_blocks_py,
+                     trained_model_dirs=trained_model_dir)
+         if restore_blocks_py:
+             if with_grad:
+                 assert os.path.exists(os.path.join(work_dir, "hamiltonians_grad_pred.h5"))
+                 assert os.path.exists(os.path.join(work_dir, "hamiltonians_pred.h5"))
+             else:
+                 assert os.path.exists(os.path.join(work_dir, "rh_pred.h5"))
+         else:
+             capture_output = sp.run(cmd3_post, shell=True, capture_output=False, encoding="utf-8")
+             assert capture_output.returncode == 0
+             assert os.path.exists(os.path.join(work_dir, "rh_pred.h5"))
+         print('\n******* Finish 3.get_pred_Hamiltonian, cost %d seconds\n' % (time.time() - begin))
+
+     if 4 in task:
+         begin = time.time()
+         print(f"\n####### Begin 4.rotate_back")
+         rotate_back(input_dir=work_dir, output_dir=work_dir)
+         assert os.path.exists(os.path.join(work_dir, "hamiltonians_pred.h5"))
+         print('\n******* Finish 4.rotate_back, cost %d seconds\n' % (time.time() - begin))
+
+     if 5 in task:
+         begin = time.time()
+         print(f"\n####### Begin 5.sparse_calc")
+         capture_output = sp.run(cmd5, shell=True, capture_output=False, encoding="utf-8")
+         assert capture_output.returncode == 0
+         if eigen_solver in ['sparse_jl']:
+             assert os.path.exists(os.path.join(work_dir, "sparse_matrix.jld"))
+         print('\n******* Finish 5.sparse_calc, cost %d seconds\n' % (time.time() - begin))
+
+
+ if __name__ == '__main__':
+     main()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/preprocess.py ADDED
@@ -0,0 +1,199 @@
+ import os
+ import subprocess as sp
+ import time
+
+ import numpy as np
+ import argparse
+ from pathos.multiprocessing import ProcessingPool as Pool
+
+ from deeph import get_preprocess_config, get_rc, get_rh, abacus_parse, siesta_parse
+
+
+ def collect_magmom_from_openmx(input_dir, output_dir, num_atom, mag_element):
+     magmom_data = np.zeros((num_atom, 4))
+
+     cmd = f'grep --text -A {num_atom + 3} "Total spin moment" {os.path.join(input_dir, "openmx.scfout")}'
+     magmom_str = os.popen(cmd).read().splitlines()
+     # print("Total local magnetic moment:", magmom_str[0].split()[4])
+
+     for index in range(num_atom):
+         line = magmom_str[3 + index].split()
+         assert line[0] == str(index + 1)
+         element_str = line[1]
+         magmom_r = line[5]
+         magmom_theta = line[6]
+         magmom_phi = line[7]
+         magmom_data[index] = int(element_str in mag_element), magmom_r, magmom_theta, magmom_phi
+
+     np.savetxt(os.path.join(output_dir, "magmom.txt"), magmom_data)
+
+ def collect_magmom_from_abacus(input_dir, output_dir, abacus_suffix, num_atom, mag_element):  # to use this feature, turn on out_chg and out_mul in the ABACUS INPUT file; otherwise the mag setting in the STRU file is used, which may be inaccurate or incorrect
+     magmom_data = np.zeros((num_atom, 4))
+
+     # using running_scf.log file with INPUT file out_chg and out_mul == 1
+     cmd = f"grep 'Total Magnetism' {os.path.join(input_dir, 'OUT.' + abacus_suffix, 'running_scf.log')}"
+     datas = os.popen(cmd).read().strip().splitlines()
+     if datas:
+         for index, data in enumerate(datas):
+             element_str = data.split()[4]
+             x, y, z = map(float, data.split('(')[-1].split(')')[0].split(','))
+             vector = np.array([x, y, z])
+             r = np.linalg.norm(vector)
+             theta = np.degrees(np.arctan2(vector[1], vector[0]))
+             phi = np.degrees(np.arccos(vector[2] / r))
+             magmom_data[index] = int(element_str in mag_element), r, theta, phi
+     else:  # fall back to the magmom setting in the STRU file, THIS MAY CAUSE WRONG OUTPUT!
+         index_atom = 0
+         with open(os.path.join(input_dir, "STRU"), 'r') as file:
+             lines = file.readlines()
+         for k in range(len(lines)):  # k = line index
+             if lines[k].strip() == 'ATOMIC_POSITIONS':
+                 kk = k + 2  # kk = current line index
+                 while kk < len(lines):
+                     if lines[kk] == "\n":  # empty line between two elements, which ABACUS accepts
+                         kk += 1; continue  # a bare 'continue' here would loop forever
+                     element_str = lines[kk].strip()
+                     element_amount = int(lines[kk + 2].strip())
+                     for j in range(element_amount):
+                         line = lines[kk + 3 + j].strip().split()
+                         if len(line) < 11:  # check whether magmom is included
+                             raise ValueError('this line does not contain magmom: {} in this file: {}'.format(line, input_dir))
+                         if line[7] != "angle1" and line[8] != "angle1":  # check whether magmom is in angle mode
+                             raise ValueError('mag in STRU should be mag * angle1 * angle2 *')
+                         if line[6] == "mag":  # offset depends on whether the 'mag' keyword is present
+                             index_str = 7
+                         else:
+                             index_str = 8
+                         magmom_data[index_atom] = int(element_str in mag_element), line[index_str], line[index_str + 2], line[index_str + 4]
+                         index_atom += 1
+                     kk += 3 + element_amount
+
+     np.savetxt(os.path.join(output_dir, "magmom.txt"), magmom_data)
+
+
+ def main():
+     parser = argparse.ArgumentParser(description='Deep Hamiltonian')
+     parser.add_argument('--config', default=[], nargs='+', type=str, metavar='N')
+     args = parser.parse_args()
+
+     print(f'User config name: {args.config}')
+     config = get_preprocess_config(args.config)
+
+     raw_dir = os.path.abspath(config.get('basic', 'raw_dir'))
+     processed_dir = os.path.abspath(config.get('basic', 'processed_dir'))
+     abacus_suffix = str(config.get('basic', 'abacus_suffix', fallback='ABACUS'))
+     target = config.get('basic', 'target')
+     interface = config.get('basic', 'interface')
+     local_coordinate = config.getboolean('basic', 'local_coordinate')
+     multiprocessing = config.getint('basic', 'multiprocessing')
+     get_S = config.getboolean('basic', 'get_S')
+
+     julia_interpreter = config.get('interpreter', 'julia_interpreter')
+
+     def make_cmd(input_dir, output_dir, target, interface, get_S):
+         if interface == 'openmx':
+             if target == 'hamiltonian':
+                 cmd = f"{julia_interpreter} " \
+                       f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'preprocess', 'openmx_get_data.jl')} " \
+                       f"--input_dir {input_dir} --output_dir {output_dir} --save_overlap {str(get_S).lower()}"
+             elif target == 'density_matrix':
+                 cmd = f"{julia_interpreter} " \
+                       f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'preprocess', 'openmx_get_data.jl')} " \
+                       f"--input_dir {input_dir} --output_dir {output_dir} --save_overlap {str(get_S).lower()} --if_DM true"
+             else:
+                 raise ValueError('Unknown target: {}'.format(target))
+         elif interface == 'siesta' or interface == 'abacus':
+             cmd = ''
+         elif interface == 'aims':
+             cmd = f"{julia_interpreter} " \
+                   f"{os.path.join(os.path.dirname(os.path.dirname(__file__)), 'preprocess', 'aims_get_data.jl')} " \
+                   f"--input_dir {input_dir} --output_dir {output_dir} --save_overlap {str(get_S).lower()}"
+         else:
+             raise ValueError('Unknown interface: {}'.format(interface))
+         return cmd
+
+     os.chdir(raw_dir)
+     relpath_list = []
+     abspath_list = []
+     for root, dirs, files in os.walk('./'):
+         if (interface == 'openmx' and 'openmx.scfout' in files) or (
+                 interface == 'abacus' and 'OUT.' + abacus_suffix in dirs) or (
+                 interface == 'siesta' and any(['.HSX' in ifile for ifile in files])) or (
+                 interface == 'aims' and 'NoTB.dat' in files):
+             relpath_list.append(root)
+             abspath_list.append(os.path.abspath(root))
+
+     os.makedirs(processed_dir, exist_ok=True)
+     os.chdir(processed_dir)
+     print(f"Found {len(abspath_list)} directories to preprocess")
+
+     def worker(index):
+         time_cost = time.time() - begin_time
+         current_block = index // nodes
+         if current_block < 1:
+             time_estimate = '?'
+         else:
+             num_blocks = (len(abspath_list) + nodes - 1) // nodes
+             time_estimate = time.localtime(time_cost / current_block * (num_blocks - current_block))
+             time_estimate = time.strftime("%H:%M:%S", time_estimate)
+         print(f'\rPreprocessing No. {index + 1}/{len(abspath_list)} '
+               f'[{time.strftime("%H:%M:%S", time.localtime(time_cost))}<{time_estimate}]...', end='')
+         abspath = abspath_list[index]
+         relpath = relpath_list[index]
+         os.makedirs(relpath, exist_ok=True)
+         cmd = make_cmd(
+             abspath,
+             os.path.abspath(relpath),
+             target=target,
+             interface=interface,
+             get_S=get_S,
+         )
+         capture_output = sp.run(cmd, shell=True, capture_output=True, encoding="utf-8")
+         if capture_output.returncode != 0:
+             with open(os.path.join(os.path.abspath(relpath), 'error.log'), 'w') as f:
+                 f.write(f'[stdout of cmd "{cmd}"]:\n\n{capture_output.stdout}\n\n\n'
+                         f'[stderr of cmd "{cmd}"]:\n\n{capture_output.stderr}')
+             print(f'\nFailed to preprocess: {abspath}, '
+                   f'log file was saved to {os.path.join(os.path.abspath(relpath), "error.log")}')
+             return
+
+         if interface == 'abacus':
+             print("Output subdirectories:", "OUT." + abacus_suffix)
+             abacus_parse(abspath, os.path.abspath(relpath), 'OUT.' + abacus_suffix)
+         elif interface == 'siesta':
+             siesta_parse(abspath, os.path.abspath(relpath))
+         if local_coordinate:
+             get_rc(os.path.abspath(relpath), os.path.abspath(relpath), radius=config.getfloat('graph', 'radius'),
+                    r2_rand=config.getboolean('graph', 'r2_rand'),
+                    create_from_DFT=config.getboolean('graph', 'create_from_DFT'), neighbour_file='hamiltonians.h5')
+             get_rh(os.path.abspath(relpath), os.path.abspath(relpath), target)
+         if config.getboolean('magnetic_moment', 'parse_magnetic_moment'):
+             num_atom = np.loadtxt(os.path.join(os.path.abspath(relpath), 'element.dat')).shape[0]
+             if interface == 'openmx':
+                 collect_magmom_from_openmx(
+                     abspath, os.path.abspath(relpath),
+                     num_atom, eval(config.get('magnetic_moment', 'magnetic_element')))
+             elif interface == 'abacus':
+                 collect_magmom_from_abacus(
+                     abspath, os.path.abspath(relpath), abacus_suffix,
+                     num_atom, eval(config.get('magnetic_moment', 'magnetic_element')))
+             else:
+                 raise ValueError('Magnetic moment can only be parsed from OpenMX or ABACUS output for now, but your interface is {}'.format(interface))
+
+     begin_time = time.time()
+     if multiprocessing != 0:
+         if multiprocessing > 0:
+             pool_dict = {'nodes': multiprocessing}
+         else:
+             pool_dict = {}
+         with Pool(**pool_dict) as pool:
+             nodes = pool.nodes
+             print(f'Use multiprocessing (nodes = {nodes})')
+             pool.map(worker, range(len(abspath_list)))
+     else:
+         nodes = 1
+         for index in range(len(abspath_list)):
+             worker(index)
+     print(f'\nPreprocess finished in {time.time() - begin_time:.2f} seconds')
+
+ if __name__ == '__main__':
+     main()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/scripts/train.py ADDED
@@ -0,0 +1,23 @@
+ import argparse
+
+ from deeph import DeepHKernel, get_config
+
+
+ def main():
+     parser = argparse.ArgumentParser(description='Deep Hamiltonian')
+     parser.add_argument('--config', default=[], nargs='+', type=str, metavar='N')
+     args = parser.parse_args()
+
+     print(f'User config name: {args.config}')
+     config = get_config(args.config)
+     only_get_graph = config.getboolean('basic', 'only_get_graph')
+     kernel = DeepHKernel(config)
+     train_loader, val_loader, test_loader, transform = kernel.get_dataset(only_get_graph)
+     if only_get_graph:
+         return
+     kernel.build_model()
+     kernel.set_train()
+     kernel.train(train_loader, val_loader, test_loader)
+
+ if __name__ == '__main__':
+     main()
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/src/deeph/utils.py ADDED
@@ -0,0 +1,213 @@
+ import os
+ import shutil
+ import sys
+ from configparser import ConfigParser
+ from inspect import signature
+
+ import numpy as np
+ import scipy
+ import torch
+ from torch import nn, package
+ import h5py
+
+
+ def print_args(args):
+     for k, v in args._get_kwargs():
+         print('{} = {}'.format(k, v))
+     print('')
+
+
+ class Logger(object):
+     def __init__(self, filename):
+         self.terminal = sys.stdout
+         self.log = open(filename, "a", buffering=1)
+
+     def write(self, message):
+         self.terminal.write(message)
+         self.log.write(message)
+
+     def flush(self):
+         pass
+
+
+ class MaskMSELoss(nn.Module):
+     def __init__(self) -> None:
+         super(MaskMSELoss, self).__init__()
+
+     def forward(self, input: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
+         assert input.shape == target.shape == mask.shape
+         mse = torch.pow(input - target, 2)
+         mse = torch.masked_select(mse, mask).mean()
+
+         return mse
+
+
+ class MaskMAELoss(nn.Module):
+     def __init__(self) -> None:
+         super(MaskMAELoss, self).__init__()
+
+     def forward(self, input: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
+         assert input.shape == target.shape == mask.shape
+         mae = torch.abs(input - target)
+         mae = torch.masked_select(mae, mask).mean()
+
+         return mae
+
+
+ class LossRecord:
+     def __init__(self):
+         self.reset()
+
+     def reset(self):
+         self.last_val = 0
+         self.avg = 0
+         self.sum = 0
+         self.count = 0
+
+     def update(self, val, num=1):
+         self.last_val = val
+         self.sum += val * num
+         self.count += num
+         self.avg = self.sum / self.count
+
+
+ def if_integer(string):
+     try:
+         int(string)
+         return True
+     except ValueError:
+         return False
+
+
+ class Transform:
+     def __init__(self, tensor=None, mask=None, normalizer=False, boxcox=False):
+         self.normalizer = normalizer
+         self.boxcox = boxcox
+         if normalizer:
+             raise NotImplementedError
+             self.mean = abs(tensor).sum(dim=0) / mask.sum(dim=0)
+             self.std = None
+             print(f'[normalizer] mean: {self.mean}, std: {self.std}')
+         if boxcox:
+             raise NotImplementedError
+             _, self.opt_lambda = scipy.stats.boxcox(tensor.double())
+             print('[boxcox] optimal lambda value:', self.opt_lambda)
+
+     def tran(self, tensor):
+         if self.boxcox:
+             tensor = scipy.special.boxcox(tensor, self.opt_lambda)
+         if self.normalizer:
+             tensor = (tensor - self.mean) / self.std
+         return tensor
+
+     def inv_tran(self, tensor):
+         if self.normalizer:
+             tensor = tensor * self.std + self.mean
+         if self.boxcox:
+             tensor = scipy.special.inv_boxcox(tensor, self.opt_lambda)
+         return tensor
+
+     def state_dict(self):
+         result = {'normalizer': self.normalizer,
+                   'boxcox': self.boxcox}
+         if self.normalizer:
+             result['mean'] = self.mean
+             result['std'] = self.std
+         if self.boxcox:
+             result['opt_lambda'] = self.opt_lambda
+         return result
+
+     def load_state_dict(self, state_dict):
+         self.normalizer = state_dict['normalizer']
+         self.boxcox = state_dict['boxcox']
+         if self.normalizer:
+             self.mean = state_dict['mean']
+             self.std = state_dict['std']
+             print(f'Load state dict, mean: {self.mean}, std: {self.std}')
+         if self.boxcox:
+             self.opt_lambda = state_dict['opt_lambda']
+             print('Load state dict, optimal lambda value:', self.opt_lambda)
+
+
+ def save_model(state, model_dict, model_state_dict, path, is_best):
+     model_dir = os.path.join(path, 'model.pt')
+     package_dict = {}
+     if 'verbose' in list(signature(package.PackageExporter.__init__).parameters.keys()):
+         package_dict['verbose'] = False
+     with package.PackageExporter(model_dir, **package_dict) as exp:
+         exp.intern('deeph.**')
+         exp.extern([
+             'scipy.**', 'numpy.**', 'torch_geometric.**', 'sklearn.**',
+             'torch_scatter.**', 'torch_sparse.**', 'torch_cluster.**', 'torch_spline_conv.**',
+             'pyparsing', 'jinja2', 'sys', 'mkl', 'io', 'setuptools.**', 'rdkit.Chem', 'tqdm',
+             '__future__', '_operator', '_ctypes', 'six.moves.urllib', 'ase', 'matplotlib.pyplot', 'sympy', 'networkx',
+         ])
+         exp.save_pickle('checkpoint', 'model.pkl', state | model_dict)
+     torch.save(state | model_state_dict, os.path.join(path, 'state_dict.pkl'))
+     if is_best:
+         shutil.copyfile(os.path.join(path, 'model.pt'), os.path.join(path, 'best_model.pt'))
+         shutil.copyfile(os.path.join(path, 'state_dict.pkl'), os.path.join(path, 'best_state_dict.pkl'))
+
+
+ def write_ham_h5(hoppings_dict, path):
+     fid = h5py.File(path, "w")
+     for k, v in hoppings_dict.items():
+         fid[k] = v
+     fid.close()
+
+
+ def write_ham_npz(hoppings_dict, path):
+     np.savez(path, **hoppings_dict)
+
+
+ def write_ham(hoppings_dict, path):
+     os.makedirs(path, exist_ok=True)
+     for key_term, matrix in hoppings_dict.items():
+         np.savetxt(os.path.join(path, f'{key_term}_real.dat'), matrix)
+
+
+ def get_config(args):
+     config = ConfigParser()
+     config.read(os.path.join(os.path.dirname(__file__), 'default.ini'))
+     for config_file in args:
+         assert os.path.exists(config_file)
+         config.read(config_file)
+     if config['basic']['target'] == 'O_ij':
+         assert config['basic']['O_component'] in ['H_minimum', 'H_minimum_withNA', 'H', 'Rho']
+     if config['basic']['target'] == 'E_ij':
+         assert config['basic']['energy_component'] in ['xc', 'delta_ee', 'both', 'summation', 'E_ij']
+     else:
+         assert config['hyperparameter']['criterion'] in ['MaskMSELoss']
+         assert config['basic']['target'] in ['hamiltonian']
+     assert config['basic']['interface'] in ['h5', 'h5_rc_only', 'h5_Eij', 'npz', 'npz_rc_only']
+     assert config['network']['aggr'] in ['add', 'mean', 'max']
+     assert config['network']['distance_expansion'] in ['GaussianBasis', 'BesselBasis', 'ExpBernsteinBasis']
+     assert config['network']['normalization'] in ['BatchNorm', 'LayerNorm', 'PairNorm', 'InstanceNorm', 'GraphNorm',
+                                                   'DiffGroupNorm', 'None']
+     assert config['network']['atom_update_net'] in ['CGConv', 'GAT', 'PAINN']
+     assert config['hyperparameter']['optimizer'] in ['sgd', 'sgdm', 'adam', 'adamW', 'adagrad', 'RMSprop', 'lbfgs']
+     assert config['hyperparameter']['lr_scheduler'] in ['', 'MultiStepLR', 'ReduceLROnPlateau', 'CyclicLR']
+
+     return config
+
+
+ def get_inference_config(*args):
+     config = ConfigParser()
+     config.read(os.path.join(os.path.dirname(__file__), 'inference', 'inference_default.ini'))
+     for config_file in args:
+         config.read(config_file)
+     assert config['basic']['interface'] in ['openmx', 'abacus']
+
+     return config
+
+
+ def get_preprocess_config(*args):
+     config = ConfigParser()
+     config.read(os.path.join(os.path.dirname(__file__), 'preprocess', 'preprocess_default.ini'))
+     for config_file in args:
+         config.read(config_file)
+     assert config['basic']['target'] in ['hamiltonian', 'density_matrix', 'phiVdphi']
+     assert config['basic']['interface'] in ['openmx', 'abacus', 'aims', 'siesta']
+     assert if_integer(config['basic']['multiprocessing']), "value of multiprocessing must be an integer"
+
+     return config
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/pred_ham_std/stderr.txt ADDED
File without changes
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rc.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02874eaa094e453bc3638de0efea7186ca0c1a9d9e212c1aaac2999d5343704c
+ size 1065104
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rh.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5289fc9b19cad0a621bf3a85dd0af86c9541e23ae88b6536451a5e5831d0bef
+ size 4141696
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rh_pred.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5ac5bd0de9f166e752756e87e1cb2924dcfd14758109b8096209173718860ab
+ size 4133504
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/rlat.dat ADDED
@@ -0,0 +1,3 @@
+ -8.807380587617869017e-01 8.807380587617869017e-01 8.807380587617869017e-01
+ 8.807380587617869017e-01 -8.807380587617869017e-01 8.807380587617869017e-01
+ 8.807380587617869017e-01 8.807380587617869017e-01 -8.807380587617869017e-01
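The rows of `rlat.dat` are consistent with the reciprocal lattice of the FCC primitive cell reported in `hpro.log` (primitive vectors (0, 3.567, 3.567) Å and permutations): stacking the real-space vectors as the rows of a matrix A, the reciprocal rows are 2π·(A⁻¹)ᵀ. A quick check, assuming NumPy is available:

```python
import numpy as np

# Real-space primitive vectors from hpro.log, in angstrom (rows of A).
t = 3.567
A = np.array([[0.0, t, t],
              [t, 0.0, t],
              [t, t, 0.0]])

# Reciprocal lattice rows b_i = 2*pi * (A^-1)^T, in angstrom^-1.
rlat = 2 * np.pi * np.linalg.inv(A).T
print(rlat[0])  # ≈ [-0.8807  0.8807  0.8807], matching the first row of rlat.dat
```

The value 0.8807380588 Å⁻¹ is π/3.567, i.e. 2π over the conventional cubic lattice constant 7.134 Å, as expected for the BCC reciprocal of an FCC lattice.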
example/diamond/1_data_prepare/data/bands/sc/reconstruction/aohamiltonian/site_positions.dat ADDED
@@ -0,0 +1,3 @@
+ 0.000000000000000000e+00 8.917499994284623366e-01 1.783499998856923785e+00 2.675249998285386344e+00 1.783499998856923785e+00 2.675249998285386344e+00 3.566999997713848014e+00 4.458749997142310129e+00 0.000000000000000000e+00 8.917499994284623366e-01 1.783499998856923785e+00 2.675249998285386344e+00 1.783499998856923785e+00 2.675249998285386344e+00 3.566999997713848014e+00 4.458749997142310129e+00
+ 0.000000000000000000e+00 8.917499994284623366e-01 1.783499998856923785e+00 2.675249998285386344e+00 0.000000000000000000e+00 8.917499994284623366e-01 1.783499998856923785e+00 2.675249998285386344e+00 1.783499998856923785e+00 2.675249998285386344e+00 3.566999997713848014e+00 4.458749997142310129e+00 1.783499998856923785e+00 2.675249998285386344e+00 3.566999997713848014e+00 4.458749997142310129e+00
+ 0.000000000000000000e+00 8.917499994284623366e-01 0.000000000000000000e+00 8.917499994284623366e-01 1.783499998856923785e+00 2.675249998285386344e+00 1.783499998856923785e+00 2.675249998285386344e+00 1.783499998856923785e+00 2.675249998285386344e+00 1.783499998856923785e+00 2.675249998285386344e+00 3.566999997713848014e+00 4.458749997142310129e+00 3.566999997713848014e+00 4.458749997142310129e+00
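Note that `site_positions.dat` stores coordinates transposed: three rows (x, y, z) with one column per atom (16 carbon atoms here, matching `hpro.log`). Loading it therefore needs a transpose to get the usual (n_atoms, 3) array. A minimal sketch, assuming NumPy, with the data abbreviated to the first three atoms:

```python
import io
import numpy as np

# First three columns of the file above, abbreviated for the example.
text = """\
0.0 0.89175 1.7835
0.0 0.89175 1.7835
0.0 0.89175 0.0
"""

rows = np.loadtxt(io.StringIO(text))  # shape (3, n_atoms): x row, y row, z row
positions = rows.T                    # shape (n_atoms, 3): one atom per row
print(positions[1])                   # [0.89175 0.89175 0.89175]
```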
example/diamond/1_data_prepare/data/bands/sc/reconstruction/calc.py ADDED
@@ -0,0 +1,11 @@
+ from HPRO import PW2AOkernel
+
+ kernel = PW2AOkernel(
+     lcao_interface='siesta',
+     lcaodata_root='../../../../../aobasis',
+     hrdata_interface='qe-bgw',
+     vscdir='../scf/VSC',
+     upfdir='../../../../../pseudos',
+     ecutwfn=30
+ )
+ kernel.run_pw2ao_rs('./aohamiltonian')
example/diamond/1_data_prepare/data/bands/sc/reconstruction/hpro.log ADDED
@@ -0,0 +1,59 @@
+
+ ==============================================================================
+ Program HPRO
+ Author: Xiaoxun Gong (xiaoxun.gong@gmail.com)
+ ==============================================================================
+
+ Structure information:
+ Primitive lattice vectors (angstrom):
+ a = ( 0.0000000 3.5670000 3.5670000)
+ b = ( 3.5670000 0.0000000 3.5670000)
+ c = ( 3.5670000 3.5670000 0.0000000)
+ Atomic species and numbers in unit cell: C: 16.
+
+ Atomic orbital basis:
+ Format: siesta
+ Element C:
+ Orbital 1: l = 0, cutoff = 4.493 a.u., norm = 1.000
+ Orbital 2: l = 0, cutoff = 4.502 a.u., norm = 1.000
+ Orbital 3: l = 1, cutoff = 5.468 a.u., norm = 1.000
+ Orbital 4: l = 1, cutoff = 5.479 a.u., norm = 1.000
+ Orbital 5: l = 2, cutoff = 5.446 a.u., norm = 1.000
+
+ Real space grid dimensions: ( 48 48 48)
+
+ Pseudopotential projectors:
+ Format: qe
+ Element C:
+ Orbital 1: l = 0, cutoff = 1.310 a.u., norm = 1.000
+ Orbital 2: l = 0, cutoff = 1.310 a.u., norm = 1.000
+ Orbital 3: l = 1, cutoff = 1.310 a.u., norm = 1.000
+ Orbital 4: l = 1, cutoff = 1.310 a.u., norm = 1.000
+
+ IO done, total wall time = 0:00:00
+
+ ===============================================
+ Reconstructing PW Hamiltonian to AOs in real space
+ ===============================================
+
+ Calculating overlap
+
+ Writing overlap matrices to disk
+
+ Constructing Hamiltonian operator with 1184 blocks
+ 10%|████ | 119/1184 [00:18<02:49, 6.26it/s]
+ 20%|████████ | 238/1184 [00:33<02:07, 7.40it/s]
+ 30%|████████████ | 357/1184 [00:48<01:49, 7.55it/s]
+ 40%|████████████████ | 476/1184 [01:03<01:33, 7.60it/s]
+ 50%|████████████████████ | 595/1184 [01:20<01:19, 7.40it/s]
+ 60%|████████████████████████ | 714/1184 [01:35<01:02, 7.56it/s]
+ 70%|████████████████████████████▏ | 833/1184 [01:54<00:49, 7.09it/s]
+ 80%|████████████████████████████████▏ | 952/1184 [02:12<00:33, 6.93it/s]
+ 90%|████████████████████████████████████▏ | 1071/1184 [02:27<00:15, 7.25it/s]
+ 100%|████████████████████████████████████████| 1184/1184 [02:42<00:00, 7.27it/s]
+ Done, elapsed time: 162.8s.
+
+ Writing Hamiltonian matrices to disk
+
+ Job done, total wall time = 0:02:46
+