---
dataset_info:
  features:
  - name: smiles
    dtype: string
  - name: logP
    dtype: float64
  - name: qed
    dtype: float64
  - name: SAS
    dtype: float64
  - name: canonical_smiles
    dtype: string
  - name: single_bond
    dtype: int64
  - name: double_bond
    dtype: int64
  - name: triple_bond
    dtype: int64
  - name: aromatic_bond
    dtype: int64
  - name: ring_count
    dtype: int64
  - name: R3
    dtype: int64
  - name: R4
    dtype: int64
  - name: R5
    dtype: int64
  - name: R6
    dtype: int64
  - name: R7
    dtype: int64
  - name: R8
    dtype: int64
  - name: R9
    dtype: int64
  - name: R10
    dtype: int64
  - name: R12
    dtype: int64
  - name: R13
    dtype: int64
  - name: R14
    dtype: int64
  - name: R15
    dtype: int64
  - name: R18
    dtype: int64
  - name: R24
    dtype: int64
  splits:
  - name: train
    num_bytes: 61223067
    num_examples: 224568
  - name: validation
    num_bytes: 6784626
    num_examples: 24887
  download_size: 22056296
  dataset_size: 68007693
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
# Dataset Card for "ZINC250k"
ZINC250k from [Irwin et al., 2005](https://pubmed.ncbi.nlm.nih.gov/15667143/); taken from [Jo et al., 2022](https://arxiv.org/abs/2202.02514).
Data downloaded from: https://github.com/harryjo97/GDSS.
Additional annotations (bond and ring counts) were added using the [`rdkit`](https://www.rdkit.org/docs/index.html) library.
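The `logP`, `qed`, and `SAS` columns are carried over from the GDSS `zinc250k.csv` file rather than recomputed here. For reference, comparable values can be obtained with `rdkit`; the sketch below assumes the standard Crippen logP, `QED`, and contrib `sascorer` implementations (the exact scripts used upstream are not documented in this card):
```python
import os
import sys

from rdkit import Chem, RDConfig
from rdkit.Chem import Crippen, QED

# The synthetic accessibility (SA) scorer ships in RDKit's contrib directory.
sys.path.append(os.path.join(RDConfig.RDContribDir, 'SA_Score'))
import sascorer  # noqa: E402

mol = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')  # example molecule (aspirin)
print(Crippen.MolLogP(mol))          # Wildman-Crippen logP estimate
print(QED.qed(mol))                  # quantitative estimate of drug-likeness
print(sascorer.calculateScore(mol))  # synthetic accessibility score
```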
## Quick start usage:
```python
from datasets import load_dataset
ds = load_dataset("yairschiff/zinc250k")
# Use `ds['train']['canonical_smiles']` (SMILES canonicalized with `rdkit`) as inputs.
```
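The precomputed ring and bond annotation columns can also be used for filtering without re-running `rdkit`; a small usage sketch (the filter criteria below are arbitrary examples):
```python
from rdkit import Chem

train = ds['train']

# Rebuild an RDKit molecule from the stored canonical SMILES.
mol = Chem.MolFromSmiles(train[0]['canonical_smiles'])

# Example filter using the precomputed annotation columns:
# molecules with at least one aromatic bond and fewer than three rings.
subset = train.filter(lambda ex: ex['aromatic_bond'] > 0 and ex['ring_count'] < 3)
print(len(subset))
```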
## Full processing steps
```python
import json
import re
import typing
import datasets
import pandas as pd
import rdkit
from rdkit import Chem as rdChem
from tqdm.auto import tqdm
# TODO: Update to 2024.03.6 release when available instead of suppressing warning!
# See: https://github.com/rdkit/rdkit/issues/7625#
rdkit.rdBase.DisableLog('rdApp.warning')
def count_rings_and_bonds(
    mol: rdChem.Mol
) -> typing.Dict[str, int]:
    """Counts bonds and rings (by type)."""
    # Counting rings
    ssr = rdChem.GetSymmSSSR(mol)
    ring_count = len(ssr)
    ring_sizes = {}
    for ring in ssr:
        ring_size = len(ring)
        if ring_size not in ring_sizes:
            ring_sizes[ring_size] = 0
        ring_sizes[ring_size] += 1
    # Counting bond types
    bond_counts = {
        'single': 0,
        'double': 0,
        'triple': 0,
        'aromatic': 0
    }
    for bond in mol.GetBonds():
        if bond.GetIsAromatic():
            bond_counts['aromatic'] += 1
        elif bond.GetBondType() == rdChem.BondType.SINGLE:
            bond_counts['single'] += 1
        elif bond.GetBondType() == rdChem.BondType.DOUBLE:
            bond_counts['double'] += 1
        elif bond.GetBondType() == rdChem.BondType.TRIPLE:
            bond_counts['triple'] += 1
    result = {
        'ring_count': ring_count,
    }
    for k, v in ring_sizes.items():
        result[f"R{k}"] = v
    for k, v in bond_counts.items():
        result[f"{k}_bond"] = v
    return result
"""
Download data and validation indices from:
"Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations"
https://github.com/harryjo97/GDSS
> wget https://raw.githubusercontent.com/harryjo97/GDSS/master/data/zinc250k.csv
> wget https://raw.githubusercontent.com/harryjo97/GDSS/master/data/valid_idx_zinc250k.json
"""
df = pd.read_csv('<PATH TO zinc250k.csv>', index_col=0, encoding='utf_8')
feats = []
for i, row in tqdm(df.iterrows(), total=len(df), desc='RDKit feats', leave=False):
    feat = {'smiles': row['smiles']}
    feat['canonical_smiles'] = rdChem.CanonSmiles(feat['smiles'])
    m = rdChem.MolFromSmiles(feat['canonical_smiles'])
    feat.update(count_rings_and_bonds(m))
    feats.append(feat)
df = pd.merge(df, pd.DataFrame.from_records(feats), on='smiles')
df = df.fillna(0)
for col in df.columns:  # recast ring counts as int
    if re.search("^R[0-9]+$", col) is not None:
        df[col] = df[col].astype(int)
# Re-order columns
df = df[
    ['smiles', 'logP', 'qed', 'SAS', 'canonical_smiles',
     'single_bond', 'double_bond', 'triple_bond', 'aromatic_bond',
     'ring_count', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'R9', 'R10', 'R12', 'R13', 'R14', 'R15', 'R18', 'R24']]
# Read in validation indices
with open('<PATH TO valid_idx_zinc250k.json>', 'r') as f:
    valid_idxs = json.load(f)
df['validation'] = df.index.isin(valid_idxs).astype(int)
# Create HF dataset
dataset = datasets.DatasetDict({
    'train': datasets.Dataset.from_pandas(df[df['validation'] == 0].drop(columns=['validation'])),
    'validation': datasets.Dataset.from_pandas(df[df['validation'] == 1].drop(columns=['validation'])),
})
dataset = dataset.remove_columns('__index_level_0__')
```
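As a final step, the resulting splits can be checked against the sizes listed in the metadata above; publishing the processed dataset is then optional (the repository name below is a placeholder, continuing from the block above):
```python
# Sanity check against the split sizes reported in the dataset card metadata.
assert len(dataset['train']) == 224568
assert len(dataset['validation']) == 24887

# Optionally publish the processed dataset (placeholder repository name).
# dataset.push_to_hub('<YOUR_USERNAME>/zinc250k')
```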