In linguistics, the term morphology refers to the study of the internal structure of words. Each word is assumed to consist of one or more morphemes, which can be defined as the smallest linguistic unit having a particular meaning or grammatical function. One can come across morphologically simplex words, i.e. roots, as well as morphologically complex ones, such as compounds or affixed forms.
Batı-lı-laş-tır-ıl-ama-yan-lar-dan-mış-ız
west-With-Make-Caus-Pass-Neg.Abil-Nom-Pl-Abl-Evid-A3Pl
'It appears that we are among the ones that cannot be westernized.'
The morphemes that constitute a word combine in a (more or less) strict order. Most morphologically complex words have the "ROOT-SUFFIX1-SUFFIX2-..." structure. Affixes come in two types: (i) derivational affixes, which change the meaning and sometimes also the grammatical category of the base they attach to, and (ii) inflectional affixes, which serve particular grammatical functions. In general, derivational suffixes precede inflectional ones. The order of derivational suffixes is reflected in the meaning of the derived form. For instance, consider the combination of the noun göz 'eye' with the two derivational suffixes -lIK and -CI: even though the same three morphemes are used, the meaning of a word like gözcülük 'scouting' is clearly different from that of gözlükçü 'optician'.
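The effect of suffix order can be illustrated with plain string concatenation. This is only a toy sketch with hard-coded surface forms of -lIK and -CI; the real analyzer derives those surface forms through vowel harmony and other phonological rules:

```python
# Same three morphemes, attached in different orders, yield
# different words with different meanings. The surface forms
# "lük" and "çü"/"cü" are hard-coded here for illustration.
root = "göz"                  # 'eye'
gozluk = root + "lük"         # göz-lük 'glasses'
gozlukcu = gozluk + "çü"      # göz-lük-çü 'optician'
gozcu = root + "cü"           # göz-cü 'scout/watchman'
gozculuk = gozcu + "lük"      # göz-cü-lük 'scouting'

assert gozlukcu != gozculuk   # order of derivation changes the word
print(gozlukcu, gozculuk)     # gözlükçü gözcülük
```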
Here we present a new morphological analyzer, which is (i) open: the latest version of the source code, the lexicon, and the morphotactic rule engine are all available here, (ii) extendible: one disadvantage of other morphological analyzers is that their lexicons are fixed or unmodifiable, which prevents adding new bare-forms to the analyzer. In our morphological analyzer, the lexicon is in text form and easily modifiable, (iii) fast: morphological analysis is one of the core components of any NLP pipeline and must be very fast to handle huge corpora. Compared to other morphological analyzers, ours is capable of analyzing hundreds of thousands of words per second, which makes it one of the fastest Turkish morphological analyzers available.
The morphological analyzer consists of five main components, namely, a lexicon, a finite state transducer, a rule engine for suffixation, a trie data structure, and a least recently used (LRU) cache.
In this analyzer, we assume all idiosyncratic information to be encoded in the lexicon. While phonologically conditioned allomorphy is handled by the transducer, other types of allomorphy, all exceptions to otherwise regular processes, as well as words formed through derivation (except for the few transparently compositional derivational suffixes) are included in the lexicon.
In our morphological analyzer, the finite state transducer is encoded in an XML file.
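Reading such a transducer amounts to parsing states and transitions from XML. The sketch below uses a hypothetical schema (the element and attribute names state, name, start, end, transition, toState, with are assumptions, not the actual schema of turkish_finite_state_machine.xml):

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature transducer definition for illustration only.
xml_text = """
<fsm>
  <state name="NominalRoot" start="yes" end="no">
    <transition toState="NounWithAgreement" with="A3SG"/>
  </state>
  <state name="NounWithAgreement" start="no" end="yes"/>
</fsm>
"""

root = ET.fromstring(xml_text)
# Index states by name and collect (from, to, label) transition triples.
states = {s.get("name"): s for s in root.findall("state")}
transitions = [(s.get("name"), t.get("toState"), t.get("with"))
               for s in root.findall("state")
               for t in s.findall("transition")]
print(transitions)  # [('NominalRoot', 'NounWithAgreement', 'A3SG')]
```

Keeping the machine in a data file rather than in code is what makes the morphotactic rules inspectable and editable without recompilation.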
To handle these irregularities and also to accelerate the search for bare-forms, we use a trie data structure in our morphological analyzer and store all words in our lexicon in it. For a regular word, we store only the word itself in the trie, whereas for an irregular word we store both the original form and a prefix of that word.
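A minimal sketch of this idea, assuming the trie maps each stored key back to the lexicon's bare-form (the Turkish examples and the prefix "burn" for burun 'nose', whose vowel drops before vowel-initial suffixes, are illustrative):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.words = []  # bare-forms whose stored key ends at this node

class Trie:
    """Minimal sketch of the trie used to index bare-forms."""
    def __init__(self):
        self.root = TrieNode()

    def add(self, key: str, word: str):
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.words.append(word)

    def candidates(self, surface: str):
        """Collect every stored bare-form whose key is a prefix of surface."""
        node, found = self.root, []
        for ch in surface:
            if ch not in node.children:
                break
            node = node.children[ch]
            found.extend(node.words)
        return found

trie = Trie()
trie.add("kitap", "kitap")  # regular word: stored once
trie.add("burun", "burun")  # irregular word: original form ...
trie.add("burn", "burun")   # ... plus the prefix that surfaces before vowels
print(trie.candidates("burnu"))  # ['burun']
```

One downward walk over the surface word thus yields every bare-form candidate, after which the transducer checks which candidates actually license the remaining suffixes.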
You can also see the Python, Java, C++, C, Swift, JavaScript, or C# repositories.
To check if you have a compatible version of Python installed, use the following command:
python -V
You can find the latest version of Python here.
Install the latest version of Git.
pip3 install NlpToolkit-MorphologicalAnalysis-Cy
In order to work on the code, create a fork from the GitHub page, then use Git to clone your fork to your local machine:
git clone <your-fork-git-link>
A directory called TurkishMorphologicalAnalysis-Cy will be created. Alternatively, you can clone the main repository to explore the code:
git clone https://github.com/starlangsoftware/TurkishMorphologicalAnalysis-Cy.git
Steps for opening the cloned project:
- Start IDE
- Select File | Open from main menu
- Choose MorphologicalAnalysis-Cy file
- Select open as project option
- Creating FsmMorphologicalAnalyzer
- Word level morphological analysis
- Sentence level morphological analysis
FsmMorphologicalAnalyzer provides Turkish morphological analysis. This class can be created as follows:
fsm = FsmMorphologicalAnalyzer()
This generates a new TxtDictionary type dictionary from turkish_dictionary.txt with a fixed cache size of 100000, using turkish_finite_state_machine.xml.
Creating a morphological analyzer with different cache size, dictionary or finite state machine is also possible.
- With a different cache size,

fsm = FsmMorphologicalAnalyzer(50000)

- Using a different dictionary,

fsm = FsmMorphologicalAnalyzer("my_turkish_dictionary.txt")

- Specifying both the finite state machine and the dictionary,

fsm = FsmMorphologicalAnalyzer("fsm.xml", "my_turkish_dictionary.txt")

- Giving the finite state machine and cache size with a TxtDictionary object,

dictionary = TxtDictionary("my_turkish_dictionary.txt")
fsm = FsmMorphologicalAnalyzer("fsm.xml", dictionary, 50000)

- With a different finite state machine and a TxtDictionary object,

dictionary = TxtDictionary("my_turkish_dictionary.txt", "my_turkish_misspelled.txt")
fsm = FsmMorphologicalAnalyzer("fsm.xml", dictionary)
For word-level morphological analysis, the morphologicalAnalysis(word: str) method of FsmMorphologicalAnalyzer is used. It returns an FsmParseList object.
fsm = FsmMorphologicalAnalyzer()
word = "yarına"
fsmParseList = fsm.morphologicalAnalysis(word)
for i in range(fsmParseList.size()):
    print(fsmParseList.getFsmParse(i).transitionList())
Output
yar+NOUN+A3SG+P2SG+DAT
yar+NOUN+A3SG+P3SG+DAT
yarı+NOUN+A3SG+P2SG+DAT
yarın+NOUN+A3SG+PNON+DAT
From FsmParseList, a single FsmParse can be obtained as follows:
parse = fsmParseList.getFsmParse(0)
print(parse.transitionList())
Output
yar+NOUN+A3SG+P2SG+DAT
For sentence-level morphological analysis, the morphologicalAnalysis(sentence: Sentence) method of FsmMorphologicalAnalyzer is used. It returns a list of FsmParseList objects, one per word.
fsm = FsmMorphologicalAnalyzer()
sentence = Sentence("Yarın doktora gidecekler")
parseLists = fsm.morphologicalAnalysis(sentence)
for i in range(len(parseLists)):
    print("-----------------")
    for j in range(parseLists[i].size()):
        parse = parseLists[i].getFsmParse(j)
        print(parse.transitionList())
Output
-----------------
yar+NOUN+A3SG+P2SG+NOM
yar+NOUN+A3SG+PNON+GEN
yar+VERB+POS+IMP+A2PL
yarı+NOUN+A3SG+P2SG+NOM
yarın+NOUN+A3SG+PNON+NOM
-----------------
doktor+NOUN+A3SG+PNON+DAT
doktora+NOUN+A3SG+PNON+NOM
-----------------
git+VERB+POS+FUT+A3PL
git+VERB+POS^DB+NOUN+FUTPART+A3PL+PNON+NOM
@inproceedings{yildiz-etal-2019-open,
title = "An Open, Extendible, and Fast {T}urkish Morphological Analyzer",
author = {Y{\i}ld{\i}z, Olcay Taner and
Avar, Beg{\"u}m and
Ercan, G{\"o}khan},
booktitle = "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)",
month = sep,
year = "2019",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd.",
url = "https://www.aclweb.org/anthology/R19-1156",
doi = "10.26615/978-954-452-056-4_156",
pages = "1364--1372",
}
- Do not forget to set package list. All subfolders should be added to the package list.
packages=['Classification', 'Classification.Model', 'Classification.Model.DecisionTree',
'Classification.Model.Ensemble', 'Classification.Model.NeuralNetwork',
'Classification.Model.NonParametric', 'Classification.Model.Parametric',
'Classification.Filter', 'Classification.DataSet', 'Classification.Instance', 'Classification.Attribute',
'Classification.Parameter', 'Classification.Experiment',
'Classification.Performance', 'Classification.InstanceList', 'Classification.DistanceMetric',
'Classification.StatisticalTest', 'Classification.FeatureSelection'],
- Package name should be lowercase and may only include the _ character.
name='nlptoolkit_math',
- Package data should be defined and must include pyx, pxd, c and py files.
package_data={'NGram': ['*.pxd', '*.pyx', '*.c', '*.py']},
- Setup should include ext_modules with compiler directives.
ext_modules=cythonize(["NGram/*.pyx"],
compiler_directives={'language_level': "3"}),
- Define the class variables and class methods in the pxd file.
cdef class DiscreteDistribution(dict):
cdef float __sum
cpdef addItem(self, str item)
cpdef removeItem(self, str item)
cpdef addDistribution(self, DiscreteDistribution distribution)
- For default values in class method declarations, use *.
cpdef list constructIdiomLiterals(self, FsmMorphologicalAnalyzer fsm, MorphologicalParse morphologicalParse1,
MetamorphicParse metaParse1, MorphologicalParse morphologicalParse2,
MetamorphicParse metaParse2, MorphologicalParse morphologicalParse3 = *,
MetamorphicParse metaParse3 = *)
- Define the class name as cdef, class methods as cpdef, and __init__ as def.
cdef class DiscreteDistribution(dict):
def __init__(self, **kwargs):
"""
A constructor of DiscreteDistribution class which calls its super class.
"""
super().__init__(**kwargs)
self.__sum = 0.0
cpdef addItem(self, str item):
- Do not forget to comment each function.
cpdef addItem(self, str item):
"""
The addItem method takes a String item as an input and if this map contains a mapping for the item it puts the
item with given value + 1, else it puts item with value of 1.
PARAMETERS
----------
item : string
String input.
"""
- Function names should follow camel case.
cpdef addItem(self, str item):
- Local variables should follow snake case.
det = 1.0
copy_of_matrix = copy.deepcopy(self)
- Variable types should be defined for function parameters, class variables.
cpdef double getValue(self, int rowNo, int colNo):
- Local variables should be defined with types.
cpdef sortDefinitions(self):
cdef int i, j
cdef str tmp
- For abstract methods, use ABC package and declare them with @abstractmethod.
@abstractmethod
def train(self, train_set: list[Tensor]):
pass
- For private methods, use __ as prefix in their names.
cpdef list __linearRegressionOnCountsOfCounts(self, list countsOfCounts)
- For private class variables, use __ as prefix in their names.
cdef class NGram:
cdef int __N
cdef double __lambda1, __lambda2
cdef bint __interpolated
cdef set __vocabulary
cdef list __probability_of_unseen
- Write __repr__ class methods as the equivalent of toString methods.
- Write getter and setter class methods.
cpdef int getN(self)
cpdef setN(self, int N)
- If there are multiple constructors for a class, define them as constructor1, constructor2, ..., then from the original constructor call these methods.
cdef class NGram:
cpdef constructor1(self, int N, list corpus):
cpdef constructor2(self, str fileName):
def __init__(self,
NorFileName,
corpus=None):
if isinstance(NorFileName, int):
self.constructor1(NorFileName, corpus)
else:
self.constructor2(NorFileName)
- Extend test classes from unittest and use separate unit test methods.
class NGramTest(unittest.TestCase):
def test_GetCountSimple(self):
- For undefined types use object as type in the type declarations.
cdef class WordNet:
cdef object __syn_set_list
cdef object __literal_list
- For boolean types use bint as type in the type declarations.
cdef bint is_done
- Enumerated types should be used when necessary as enum classes, and should be declared in py files.
class AttributeType(Enum):
    """
    Continuous Attribute
    """
    CONTINUOUS = auto()
- Resource files should be loaded via the pkg_resources package.
fileName = pkg_resources.resource_filename(__name__, 'data/turkish_wordnet.xml')



