====== chemeleon2 - Model Loading Basics ======

===== Differences in Model Loading =====

The data-loading path of the original version is: https://

  * checkpoints/
  * checkpoints/
  * checkpoints/
| + | ===== 数据集来源区别 ===== | ||
| + | |||
| + | ^ 方面 | ||
| + | | 数据来源 | ||
| + | | 结构数量 | ||
| + | | 原子数限制 | ||
| + | | 稳定性筛选 | ||
| + | | 数据多样性 | ||
| + | | 典型用途 | ||
| + | | 模型性能对比 | ||
| + | |||
| + | 本地快速验证,可以用 mp_20 ,如果是精度要求高可以用 alex_mp_20 | ||
| + | |||
| + | ===== VAE的基本结构 ===== | ||
| + | < | ||
| + | VAEModule( | ||
| + | (encoder): TransformerEncoder( | ||
| + | (atom_type_embedder): | ||
| + | (lattices_embedder): | ||
| + | (0): Linear(in_features=9, | ||
| + | (1): SiLU() | ||
| + | (2): Linear(in_features=512, | ||
| + | ) | ||
| + | (frac_coords_embedder): | ||
| + | (0): Linear(in_features=3, | ||
| + | (1): SiLU() | ||
| + | (2): Linear(in_features=512, | ||
| + | ) | ||
| + | (transformer): | ||
| + | (layers): ModuleList( | ||
| + | (0-7): 8 x TransformerEncoderLayer( | ||
| + | (self_attn): | ||
| + | (out_proj): NonDynamicallyQuantizableLinear(in_features=512, | ||
| + | ) | ||
| + | (linear1): Linear(in_features=512, | ||
| + | (dropout): Dropout(p=0.0, | ||
| + | (linear2): Linear(in_features=2048, | ||
| + | (norm1): LayerNorm((512, | ||
| + | (norm2): LayerNorm((512, | ||
| + | (dropout1): Dropout(p=0.0, | ||
| + | (dropout2): Dropout(p=0.0, | ||
| + | (activation): | ||
| + | ) | ||
| + | ) | ||
| + | (norm): LayerNorm((512, | ||
| + | ) | ||
| + | ) | ||
| + | (decoder): TransformerDecoder( | ||
| + | (transformer): | ||
| + | (layers): ModuleList( | ||
| + | (0-7): 8 x TransformerEncoderLayer( | ||
| + | (self_attn): | ||
| + | (out_proj): NonDynamicallyQuantizableLinear(in_features=512, | ||
| + | ) | ||
| + | (linear1): Linear(in_features=512, | ||
| + | (dropout): Dropout(p=0.0, | ||
| + | (linear2): Linear(in_features=2048, | ||
| + | (norm1): LayerNorm((512, | ||
| + | (norm2): LayerNorm((512, | ||
| + | (dropout1): Dropout(p=0.0, | ||
| + | (dropout2): Dropout(p=0.0, | ||
| + | (activation): | ||
| + | ) | ||
| + | ) | ||
| + | (norm): LayerNorm((512, | ||
| + | ) | ||
| + | (atom_types_head): | ||
| + | (frac_coords_head): | ||
| + | (lattice_head): | ||
| + | ) | ||
| + | (quant_conv): | ||
| + | (post_quant_conv): | ||
| + | ) | ||
| + | </ | ||
| + | |||
| + | ===== LDM结构模型打印 ===== | ||
| + | < | ||
| + | LDMModule( | ||
| + | (denoiser): DiT( | ||
| + | (x_embedder): | ||
| + | (t_embedder): | ||
| + | (mlp): Sequential( | ||
| + | (0): Linear(in_features=256, | ||
| + | (1): SiLU() | ||
| + | (2): Linear(in_features=768, | ||
| + | ) | ||
| + | ) | ||
| + | (blocks): ModuleList( | ||
| + | (0-11): 12 x DiTBlock( | ||
| + | (norm1): LayerNorm((768, | ||
| + | (attn): MultiheadAttention( | ||
| + | (out_proj): NonDynamicallyQuantizableLinear(in_features=768, | ||
| + | ) | ||
| + | (norm2): LayerNorm((768, | ||
| + | (mlp): Mlp( | ||
| + | (fc1): Linear(in_features=768, | ||
| + | (act): GELU(approximate=' | ||
| + | (drop1): Dropout(p=0, | ||
| + | (norm): Identity() | ||
| + | (fc2): Linear(in_features=3072, | ||
| + | (drop2): Dropout(p=0, | ||
| + | ) | ||
| + | (adaLN_modulation): | ||
| + | (0): SiLU() | ||
| + | (1): Linear(in_features=768, | ||
| + | ) | ||
| + | ) | ||
| + | ) | ||
| + | (final_layer): | ||
| + | (norm_final): | ||
| + | (linear): Linear(in_features=768, | ||
| + | (adaLN_modulation): | ||
| + | (0): SiLU() | ||
| + | (1): Linear(in_features=768, | ||
| + | ) | ||
| + | ) | ||
| + | ) | ||
| + | (vae): VAEModule( | ||
| + | (encoder): TransformerEncoder( | ||
| + | (atom_type_embedder): | ||
| + | (lattices_embedder): | ||
| + | (0): Linear(in_features=9, | ||
| + | (1): SiLU() | ||
| + | (2): Linear(in_features=512, | ||
| + | ) | ||
| + | (frac_coords_embedder): | ||
| + | (0): Linear(in_features=3, | ||
| + | (1): SiLU() | ||
| + | (2): Linear(in_features=512, | ||
| + | ) | ||
| + | (transformer): | ||
| + | (layers): ModuleList( | ||
| + | (0-7): 8 x TransformerEncoderLayer( | ||
| + | (self_attn): | ||
| + | (out_proj): NonDynamicallyQuantizableLinear(in_features=512, | ||
| + | ) | ||
| + | (linear1): Linear(in_features=512, | ||
| + | (dropout): Dropout(p=0.0, | ||
| + | (linear2): Linear(in_features=2048, | ||
| + | (norm1): LayerNorm((512, | ||
| + | (norm2): LayerNorm((512, | ||
| + | (dropout1): Dropout(p=0.0, | ||
| + | (dropout2): Dropout(p=0.0, | ||
| + | (activation): | ||
| + | ) | ||
| + | ) | ||
| + | (norm): LayerNorm((512, | ||
| + | ) | ||
| + | ) | ||
| + | (decoder): TransformerDecoder( | ||
| + | (transformer): | ||
| + | (layers): ModuleList( | ||
| + | (0-7): 8 x TransformerEncoderLayer( | ||
| + | (self_attn): | ||
| + | (out_proj): NonDynamicallyQuantizableLinear(in_features=512, | ||
| + | ) | ||
| + | (linear1): Linear(in_features=512, | ||
| + | (dropout): Dropout(p=0.0, | ||
| + | (linear2): Linear(in_features=2048, | ||
| + | (norm1): LayerNorm((512, | ||
| + | (norm2): LayerNorm((512, | ||
| + | (dropout1): Dropout(p=0.0, | ||
| + | (dropout2): Dropout(p=0.0, | ||
| + | (activation): | ||
| + | ) | ||
| + | ) | ||
| + | (norm): LayerNorm((512, | ||
| + | ) | ||
| + | (atom_types_head): | ||
| + | (frac_coords_head): | ||
| + | (lattice_head): | ||
| + | ) | ||
| + | (quant_conv): | ||
| + | (post_quant_conv): | ||
| + | ) | ||
| + | ) | ||
| + | |||
| + | </ | ||