Handling Cross- and Out-of-Domain Samples in Thai Word Segmentation (ACL 2021 Findings)
Stacked Ensemble Framework with DeepCut as the Baseline Model
- Paper: Handling Cross- and Out-of-Domain Samples in Thai Word Segmentation
- Related Work (EMNLP2020): Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble
- Blog: How far has word segmentation for Thai and other languages come? (ตัดคำภาษาไทยและภาษาอื่นไปถึงไหนกันแล้ว?)
@inproceedings{limkonchotiwat-etal-2021-handling,
title = "Handling Cross- and Out-of-Domain Samples in {T}hai Word Segmentation",
author = "Limkonchotiwat, Peerat and
Phatthiyaphaibun, Wannaphong and
Sarwar, Raheem and
Chuangsuwanich, Ekapol and
Nutanong, Sarana",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.86",
doi = "10.18653/v1/2021.findings-acl.86",
pages = "1003--1016",
}
pip install OSKut
- python >= 3.6
- tensorflow >= 2.0
- Example files are in the OSKut Example notebook
- Try it on Colab
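Or, for a minimal quick start in a local Python session (a sketch; the engine names and the load_model()/OSKut() calls are the ones documented in the sections below):

```python
# Minimal quick start after `pip install OSKut`.
import oskut

oskut.load_model(engine='ws')          # Wisesight-trained model (see the engine list below)
print(oskut.OSKut('เบียร์ยูไม่อร่อย'))  # returns a list of word tokens
```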
- Available engines: ws, ws-augment-60p, tnhc, BEST, and SCADS
- ws: The model trained on Wisesight-1000 and tested on Wisesight-160
- ws-augment-60p: The model trained on Wisesight-1000 augmented with the top-60% entropy samples
- tnhc: The model trained on TNHC (80:20 train/test split with random seed 42)
- BEST: The model trained on BEST-2010 Corpus (NECTEC)
- SCADS: The model trained on VISTEC-TP-TH-2021 Corpus (VISTEC)
```python
oskut.load_model(engine='ws')
# OR oskut.load_model(engine='ws-augment-60p')
# OR oskut.load_model(engine='tnhc')
# OR oskut.load_model(engine='best')
# OR oskut.load_model(engine='scads')
```
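For example, a short sketch (built only from the engine names and calls above; each model is trained on a different domain, so the segmentations may differ) that segments the same sentence with every engine:

```python
# Sketch: segment one social-media-style sentence with each engine listed above.
import oskut

text = 'เบียร์ยูไม่อร่อยสัดๆๆๆๆๆฟๆ'
for engine in ['ws', 'ws-augment-60p', 'tnhc', 'best', 'scads']:
    oskut.load_model(engine=engine)   # switches the active model
    print(engine, oskut.OSKut(text))
```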
- tl-deepcut-XXXX
- We also provide transfer-learned DeepCut models: 'Wisesight' as tl-deepcut-ws, 'TNHC' as tl-deepcut-tnhc, and 'LST20' as tl-deepcut-lst20
```python
oskut.load_model(engine='tl-deepcut-ws')
# OR oskut.load_model(engine='tl-deepcut-tnhc')
```
- deepcut
- We also provide the original deepcut
```python
oskut.load_model(engine='deepcut')
```
Please read the paper to understand why we provide these baseline engines.
- Tokenize with the default k value
```python
oskut.load_model(engine='ws')
print(oskut.OSKut(['เบียร์ยูไม่อร่อยสัดๆๆๆๆๆฟๆ']))
print(oskut.OSKut('เบียร์ยูไม่อร่อยสัดๆๆๆๆๆฟๆ'))

['เบียร์', 'ยู', 'ไม่', 'อร่อย', 'สัด', 'ๆ', 'ๆ', 'ๆ', 'ๆ', 'ๆฟ', 'ๆ']
['เบียร์', 'ยู', 'ไม่', 'อร่อย', 'สัด', 'ๆ', 'ๆ', 'ๆ', 'ๆ', 'ๆฟ', 'ๆ']
```
- Tokenize with a specified k value
```python
oskut.load_model(engine='ws')
print(oskut.OSKut('เบียร์ยูไม่อร่อยสัดๆๆๆๆๆฟๆ', k=5))    # refine only 5% of the characters
print(oskut.OSKut('เบียร์ยูไม่อร่อยสัดๆๆๆๆๆฟๆ', k=100))  # refine 100% of the characters

['เบียร์', 'ยู', 'ไม่', 'อร่อย', 'สัด', 'ๆ', 'ๆ', 'ๆ', 'ๆ', 'ๆฟๆ']
['เบียร์', 'ยู', 'ไม่', 'อร่อย', 'สัด', 'ๆ', 'ๆ', 'ๆ', 'ๆ', 'ๆฟ', 'ๆ']
```
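As a rough illustration of what k controls (a hypothetical helper, not part of the OSKut API): per the inline comments above, k is the percentage of characters selected for refinement by the stacked model, so a larger k means more predictions are re-checked at a higher cost.

```python
# Hypothetical helper, for illustration only: estimate how many characters a
# given k selects for refinement (OSKut's actual selection is confidence-based
# internally; this only shows how the count scales with k).
import math

def approx_refined_chars(text, k):
    return math.ceil(len(text) * k / 100)

text = 'เบียร์ยูไม่อร่อยสัดๆๆๆๆๆฟๆ'
print(approx_refined_chars(text, 5))    # only a couple of characters re-checked
print(approx_refined_chars(text, 100))  # every character re-checked
```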
VISTEC-TP-TH-2021 (VISTEC) consists of 49,997 text samples from Twitter (2017-2019). The corpus contains 3.39M words and was manually annotated by linguists for four tasks: word segmentation, misspelling detection, misspelling correction, and named entity recognition.
For more information and downloads, click here.
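As a rough sketch of how the word-segmentation annotations can be read (this assumes the pipe-delimited format used by several Thai corpora; check the VISTEC-TP-TH-2021 documentation for the exact markup, including the misspelling and named-entity tags):

```python
# Sketch: read pipe-delimited word-segmentation annotations.
# The '|' delimiter and the example line are assumptions for illustration;
# verify against the corpus documentation before relying on this.
def read_segmented_line(line):
    return [tok for tok in line.strip().split('|') if tok]

print(read_segmented_line('เบียร์|ยู|ไม่|อร่อย'))
# ['เบียร์', 'ยู', 'ไม่', 'อร่อย']
```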
Thanks to the following projects:
- DeepCut (baseline model): we used some of the DeepCut code to perform transfer learning