chemeleon2: Model Loading Basics

Differences between the available checkpoints

The official checkpoints are hosted at: https://huggingface.co/hspark1212/chemeleon2-checkpoints (a download sketch follows the list below)

  • checkpoints/v0.0.1/alex_mp_20/vae/dng_j1jgz9t0_v1.ckpt (VAE trained on alex_mp_20)
  • checkpoints/v0.0.1/alex_mp_20/ldm/ldm_rl_dng_tuor5vgd.ckpt (LDM trained on alex_mp_20, with RL fine-tuning)
  • checkpoints/v0.0.1/mp_20/vae/dng_m4owq4i5_v0.ckpt (VAE trained on mp_20; no LDM checkpoint is provided for mp_20)
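
A minimal way to fetch any of these files locally is the huggingface_hub client. The sketch below only uses the repo id and file paths listed above and downloads into the default Hugging Face cache:

from huggingface_hub import hf_hub_download

REPO_ID = "hspark1212/chemeleon2-checkpoints"

# Repo-internal paths of the released checkpoints (copied from the list above).
CKPT_FILES = {
    "alex_mp_20_vae": "checkpoints/v0.0.1/alex_mp_20/vae/dng_j1jgz9t0_v1.ckpt",
    "alex_mp_20_ldm": "checkpoints/v0.0.1/alex_mp_20/ldm/ldm_rl_dng_tuor5vgd.ckpt",
    "mp_20_vae": "checkpoints/v0.0.1/mp_20/vae/dng_m4owq4i5_v0.ckpt",
}

# Download (or reuse from the local cache) one checkpoint and get its local path.
local_path = hf_hub_download(repo_id=REPO_ID, filename=CKPT_FILES["mp_20_vae"])
print(local_path)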

Differences between the source datasets (mp_20 vs. alex_mp_20)

  • Data source: mp_20 comes only from the Materials Project (MP) database; alex_mp_20 combines Materials Project (MP) and Alexandria (Alex)
  • Number of structures: mp_20 has a few tens of thousands (training set typically around 40k-50k); alex_mp_20 has roughly 607,000 (reported as 607,683 or 607,684)
  • Atom-count limit: both restrict structures to ≤20 atoms per unit cell
  • Stability filtering: mp_20 generally keeps stable structures (low energy above hull); alex_mp_20 applies a stricter cutoff of E_above_hull < 0.1 eV/atom
  • Data diversity: mp_20 is relatively small with limited coverage; alex_mp_20 is significantly larger, with more diverse compositions and structure types
  • Typical use: mp_20 is the benchmark for earlier generative models (CDVAE, DiffCSP, MatterGen-MP, etc.); alex_mp_20 is the training set for newer large models such as MatterGen, with clearly better results
  • Resulting model quality: models trained on mp_20 generate a lower fraction of SUN materials and have higher RMSD; training on alex_mp_20 raises SUN% by roughly 70% and lowers RMSD by roughly a factor of 5

For quick local validation, mp_20 is sufficient; when higher accuracy is required, use alex_mp_20.
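
The architecture printouts in the next two sections come from printing the instantiated modules. If only the raw checkpoint file is at hand, it can still be inspected with plain PyTorch; the sketch below assumes the .ckpt files follow the usual PyTorch Lightning layout (a dict containing a 'state_dict' entry), which is an assumption rather than something stated in the repo:

import torch

# local_path comes from the download sketch above.
# Recent PyTorch versions may require weights_only=False because the
# checkpoint can contain pickled non-tensor objects (e.g. hyperparameters).
ckpt = torch.load(local_path, map_location="cpu", weights_only=False)
print(ckpt.keys())  # typically 'state_dict', 'hyper_parameters', ... for Lightning checkpoints

# Peek at the first few parameter names and shapes.
state_dict = ckpt.get("state_dict", ckpt)
for name, tensor in list(state_dict.items())[:10]:
    print(name, tuple(tensor.shape))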

VAE architecture (module printout)

VAEModule(
  (encoder): TransformerEncoder(
    (atom_type_embedder): Embedding(100, 512)
    (lattices_embedder): Sequential(
      (0): Linear(in_features=9, out_features=512, bias=False)
      (1): SiLU()
      (2): Linear(in_features=512, out_features=512, bias=True)
    )
    (frac_coords_embedder): Sequential(
      (0): Linear(in_features=3, out_features=512, bias=False)
      (1): SiLU()
      (2): Linear(in_features=512, out_features=512, bias=True)
    )
    (transformer): TransformerEncoder(
      (layers): ModuleList(
        (0-7): 8 x TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=512, out_features=512, bias=True)
          )
          (linear1): Linear(in_features=512, out_features=2048, bias=True)
          (dropout): Dropout(p=0.0, inplace=False)
          (linear2): Linear(in_features=2048, out_features=512, bias=True)
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.0, inplace=False)
          (dropout2): Dropout(p=0.0, inplace=False)
          (activation): GELU(approximate='tanh')
        )
      )
      (norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
    )
  )
  (decoder): TransformerDecoder(
    (transformer): TransformerEncoder(
      (layers): ModuleList(
        (0-7): 8 x TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=512, out_features=512, bias=True)
          )
          (linear1): Linear(in_features=512, out_features=2048, bias=True)
          (dropout): Dropout(p=0.0, inplace=False)
          (linear2): Linear(in_features=2048, out_features=512, bias=True)
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.0, inplace=False)
          (dropout2): Dropout(p=0.0, inplace=False)
          (activation): GELU(approximate='tanh')
        )
      )
      (norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
    )
    (atom_types_head): Linear(in_features=512, out_features=100, bias=True)
    (frac_coords_head): Linear(in_features=512, out_features=3, bias=False)
    (lattice_head): Linear(in_features=512, out_features=6, bias=False)
  )
  (quant_conv): Linear(in_features=512, out_features=16, bias=False)
  (post_quant_conv): Linear(in_features=8, out_features=512, bias=False)
)
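
One detail worth noting in the printout: quant_conv maps the 512-d encoder output to 16 values per token, while post_quant_conv takes only 8. That is consistent with a standard VAE bottleneck in which the 16 values are split into an 8-d mean and an 8-d log-variance and a latent is sampled by reparameterisation; the split and sampling below illustrate that dimension bookkeeping and are not code taken from chemeleon2. The decoder heads then map each 512-d token back to an atom-type distribution over 100 elements, 3 fractional coordinates, and 6 lattice parameters (presumably lengths plus angles).

import torch

D_MODEL, LATENT_DIM = 512, 8  # from the printout: 512-d tokens, 8-d latent (quant_conv outputs 2 x 8)

quant_conv = torch.nn.Linear(D_MODEL, 2 * LATENT_DIM, bias=False)       # 512 -> 16
post_quant_conv = torch.nn.Linear(LATENT_DIM, D_MODEL, bias=False)      # 8 -> 512

h = torch.randn(4, 20, D_MODEL)                            # dummy encoder output: (batch, atoms, features)
mu, logvar = quant_conv(h).chunk(2, dim=-1)                # assumed split into mean and log-variance
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterisation trick
h_dec = post_quant_conv(z)                                 # back to 512-d tokens for the decoder
print(z.shape, h_dec.shape)                                # (4, 20, 8) and (4, 20, 512)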

LDM architecture (module printout)

LDMModule(
  (denoiser): DiT(
    (x_embedder): Linear(in_features=8, out_features=768, bias=True)
    (t_embedder): TimestepEmbedder(
      (mlp): Sequential(
        (0): Linear(in_features=256, out_features=768, bias=True)
        (1): SiLU()
        (2): Linear(in_features=768, out_features=768, bias=True)
      )
    )
    (blocks): ModuleList(
      (0-11): 12 x DiTBlock(
        (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=False)
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=False)
        (mlp): Mlp(
          (fc1): Linear(in_features=768, out_features=3072, bias=True)
          (act): GELU(approximate='tanh')
          (drop1): Dropout(p=0, inplace=False)
          (norm): Identity()
          (fc2): Linear(in_features=3072, out_features=768, bias=True)
          (drop2): Dropout(p=0, inplace=False)
        )
        (adaLN_modulation): Sequential(
          (0): SiLU()
          (1): Linear(in_features=768, out_features=4608, bias=True)
        )
      )
    )
    (final_layer): FinalLayer(
      (norm_final): LayerNorm((768,), eps=1e-06, elementwise_affine=False)
      (linear): Linear(in_features=768, out_features=16, bias=True)
      (adaLN_modulation): Sequential(
        (0): SiLU()
        (1): Linear(in_features=768, out_features=1536, bias=True)
      )
    )
  )
  (vae): VAEModule(
    ... (structure identical to the VAEModule printed in the previous section) ...
  )
)
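
Two numbers in the LDM printout tie the pieces together. x_embedder takes 8-d inputs, so the DiT denoises directly in the VAE's 8-d per-token latent space, and final_layer outputs 16 values, twice the latent dimension, which in the original DiT corresponds to predicting the noise together with a variance term. Each DiTBlock's adaLN_modulation projects the 768-d timestep conditioning to 6 x 768 = 4608 values, the shift/scale/gate pattern of adaLN-Zero from the DiT paper, while the final layer's 2 x 768 = 1536 projection is the same idea with only shift and scale. The sketch below reproduces only the adaLN-Zero chunking implied by those printed shapes; it is not the chemeleon2 implementation.

import torch

HIDDEN = 768

# Conditioning projection with the same shape as in the printout: 768 -> 6 * 768 = 4608.
adaLN_modulation = torch.nn.Sequential(
    torch.nn.SiLU(),
    torch.nn.Linear(HIDDEN, 6 * HIDDEN),
)

c = torch.randn(4, HIDDEN)                 # conditioning vector, e.g. the timestep embedding
shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = \
    adaLN_modulation(c).chunk(6, dim=-1)   # six 768-d chunks: shift/scale/gate for attention and MLP

x = torch.randn(4, 20, HIDDEN)             # token sequence inside a DiT block
norm = torch.nn.LayerNorm(HIDDEN, elementwise_affine=False, eps=1e-6)
x_mod = norm(x) * (1 + scale_msa.unsqueeze(1)) + shift_msa.unsqueeze(1)  # modulation before attention
print(x_mod.shape)                         # torch.Size([4, 20, 768])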
