Fix #4396. Fix #4397. Fix #4398. Throw errors for TF.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
  - Introduced new parameters `numb_fparam` and `numb_aparam` for improved fitting configurations in both dipole and polar fitting classes.
  - Added methods to retrieve the values of the new parameters and enhanced input requirement management.
- **Documentation**
  - Updated training documentation to clarify the handling of the new parameters and their limitations in the TensorFlow backend.
- **Bug Fixes**
  - Updated test configurations to reflect the new parameter structure, ensuring consistency across tests for dipole and polar models.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
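To make the change concrete, here is a sketch of how the two new options might appear in a `fitting_net` input fragment. The key names follow the deepmd-kit `fitting_net` section described in this PR; the surrounding values (`neuron`, `seed`, the specific dimensions) are illustrative assumptions, not taken from the PR.

```python
import json

# Hypothetical input fragment showing the new numb_fparam / numb_aparam
# options on a dipole fitting net; dimensions here are made up.
fitting_net = {
    "type": "dipole",
    "neuron": [100, 100, 100],
    "resnet_dt": True,
    "numb_fparam": 2,  # expects fparam.npy with 2 entries per frame
    "numb_aparam": 1,  # expects aparam.npy with 1 entry per atom
    "seed": 1,
}

print(json.dumps(fitting_net, indent=2))
```

Setting either value to 0 (the default) disables the corresponding input file, matching the behavior documented below.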
```diff
@@
+    doc_numb_fparam = "The dimension of the frame parameter. If set to >0, file `fparam.npy` should be included to provide the input fparams."
+    doc_numb_aparam = "The dimension of the atomic parameter. If set to >0, file `aparam.npy` should be included to provide the input aparams."
     doc_neuron = "The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built."
     doc_activation_function = f'The activation function in the fitting net. Supported activation functions are {list_to_doc(ACTIVATION_FN_DICT.keys())} Note that "gelu" denotes the custom operator version, and "gelu_tf" denotes the TF standard version. If you set "None" or "none" here, no activation function will be used.'
     doc_resnet_dt = 'Whether to use a "Timestep" in the skip connection'
@@ -1609,6 +1611,20 @@ def fitting_polar():
     doc_shift_diag = "Whether to shift the diagonal of polar, which is beneficial to training. Default is true."
+    doc_numb_fparam = "The dimension of the frame parameter. If set to >0, file `fparam.npy` should be included to provide the input fparams."
+    doc_numb_aparam = "The dimension of the atomic parameter. If set to >0, file `aparam.npy` should be included to provide the input aparams."
     doc_neuron = "The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built."
     doc_activation_function = f'The activation function in the fitting net. Supported activation functions are {list_to_doc(ACTIVATION_FN_DICT.keys())} Note that "gelu" denotes the custom operator version, and "gelu_tf" denotes the TF standard version. If you set "None" or "none" here, no activation function will be used.'
     doc_resnet_dt = 'Whether to use a "Timestep" in the skip connection'
     doc_precision = f"The precision of the fitting net parameters, supported options are {list_to_doc(PRECISION_DICT.keys())} Default follows the interface precision."
     doc_sel_type = "The atom types for which the atomic dipole will be provided. If not set, all types will be selected."
     doc_seed = "Random seed for parameter initialization of the fitting net"
```
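In the surrounding file, doc strings like these are attached to the schema entries that validate the user's input JSON. A minimal self-contained sketch of that pattern is below; the `Argument` class here is a hypothetical stand-in (the real code uses the `dargs` library), but the names, defaults, and doc strings match the diff above.

```python
from dataclasses import dataclass

doc_numb_fparam = (
    "The dimension of the frame parameter. If set to >0, file `fparam.npy` "
    "should be included to provide the input fparams."
)
doc_numb_aparam = (
    "The dimension of the atomic parameter. If set to >0, file `aparam.npy` "
    "should be included to provide the input aparams."
)


@dataclass
class Argument:
    """Stand-in for a schema argument (the real code uses the dargs library)."""

    name: str
    dtype: type
    optional: bool = True
    default: object = None
    doc: str = ""


def tensor_fitting_extra_args() -> list:
    # Both tensor fitting schemas (dipole and polar) gain the same two
    # integer options, defaulting to 0, i.e. disabled.
    return [
        Argument("numb_fparam", int, optional=True, default=0, doc=doc_numb_fparam),
        Argument("numb_aparam", int, optional=True, default=0, doc=doc_numb_aparam),
    ]
```

The choice of `0` as the default keeps existing inputs valid: users who never supply `fparam.npy` or `aparam.npy` see no behavior change.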
`doc/model/train-fitting-tensor.md` (1 addition, 0 deletions)
```diff
@@ -247,3 +247,4 @@ During training, at each step when the `lcurve.out` is printed, the system used
 To only fit against a subset of atomic types, in the TensorFlow backend, {ref}`fitting_net/sel_type <model[standard]/fitting_net[dipole]/sel_type>` should be set to selected types;
 in other backends, {ref}`atom_exclude_types <model/atom_exclude_types>` should be set to excluded types.
+The TensorFlow backend does not support {ref}`numb_fparam <model[standard]/fitting_net[dipole]/numb_fparam>` and {ref}`numb_aparam <model[standard]/fitting_net[dipole]/numb_aparam>`.
```
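The PR title says the TensorFlow backend throws errors rather than silently ignoring these options. A hedged sketch of such a guard is below; the function name and messages are hypothetical, illustrating only the documented limitation.

```python
def check_tf_tensor_fitting_params(numb_fparam: int, numb_aparam: int) -> None:
    """Hypothetical guard mirroring the documented TF-backend limitation:
    tensor fitting (dipole/polar) does not accept frame or atomic parameters."""
    if numb_fparam > 0:
        raise ValueError(
            "numb_fparam is not supported for tensor fitting in the TensorFlow backend"
        )
    if numb_aparam > 0:
        raise ValueError(
            "numb_aparam is not supported for tensor fitting in the TensorFlow backend"
        )
```

Failing fast at configuration time surfaces the unsupported combination immediately, instead of producing a model that quietly ignores `fparam.npy` or `aparam.npy`.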