
Natural Korean Processor for Apache Spark

For English, please go to README.eng.md

This package wraps seunjeon, the morphological analyzer of the ์€์ „ํ•œ๋‹ข (eunjeon) project, for easy use from Apache Spark. spark-nkp provides the following two Transformers:

  • Tokenizer: a transformer that splits sentences into morphemes. It can also filter for specific parts of speech.
  • Analyzer: a transformer for morphological analysis. It outputs a DataFrame with detailed information about the words in each sentence.

In addition, it provides Dictionary for user-defined dictionary support.

Usage

spark-shell

spark-shell --packages com.github.uosdmlab:spark-nkp_2.11:0.3.3
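
If you use the package from an sbt project instead, the same Maven Central coordinates should work as a library dependency. A minimal sketch, assuming a Scala 2.11 build:

// build.sbt (sketch); %% appends the Scala binary version, here _2.11
libraryDependencies += "com.github.uosdmlab" %% "spark-nkp" % "0.3.3"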

Zeppelin

It can be used in either of two ways:

  • Interpreter Setting
  • Dynamic Dependency Loading (%spark.dep)

Interpreter Setting

Interpreter Setting > Spark Interpreter > Edit > Dependencies

artifact com.github.uosdmlab:spark-nkp_2.11:0.3.3

Dynamic Dependency Loading (%spark.dep)

%spark.dep
z.load("com.github.uosdmlab:spark-nkp_2.11:0.3.3")

Examples

Tokenizer

import com.github.uosdmlab.nkp.Tokenizer

val df = spark.createDataset(
	Seq(
		"์•„๋ฒ„์ง€๊ฐ€๋ฐฉ์—๋“ค์–ด๊ฐ€์‹ ๋‹ค.",
		"์‚ฌ๋ž‘ํ•ด์š” ์ œํ”Œ๋ฆฐ!",
		"์ŠคํŒŒํฌ๋Š” ์žฌ๋ฐŒ์–ด",
		"๋‚˜๋Š”์•ผ ๋ฐ์ดํ„ฐ๊ณผํ•™์ž",
		"๋ฐ์ดํ„ฐ์•ผ~ ๋†€์ž~"
	)
).toDF("text")

val tokenizer = new Tokenizer()
	.setInputCol("text")
	.setOutputCol("words")

val result = tokenizer.transform(df)

result.show(truncate = false)

output:

+------------+--------------------------+
|text        |words                     |
+------------+--------------------------+
|์•„๋ฒ„์ง€๊ฐ€๋ฐฉ์—๋“ค์–ด๊ฐ€์‹ ๋‹ค.|[์•„๋ฒ„์ง€, ๊ฐ€, ๋ฐฉ, ์—, ๋“ค์–ด๊ฐ€, ์‹ ๋‹ค, .]|
|์‚ฌ๋ž‘ํ•ด์š” ์ œํ”Œ๋ฆฐ!   |[์‚ฌ๋ž‘, ํ•ด์š”, ์ œํ”Œ๋ฆฐ, !]          |
|์ŠคํŒŒํฌ๋Š” ์žฌ๋ฐŒ์–ด    |[์ŠคํŒŒํฌ, ๋Š”, ์žฌ๋ฐŒ, ์–ด]           |
|๋‚˜๋Š”์•ผ ๋ฐ์ดํ„ฐ๊ณผํ•™์ž  |[๋‚˜, ๋Š”, ์•ผ, ๋ฐ์ดํ„ฐ, ๊ณผํ•™์ž]       |
|๋ฐ์ดํ„ฐ์•ผ~ ๋†€์ž~   |[๋ฐ์ดํ„ฐ, ์•ผ, ~, ๋†€์ž, ~]        |
+------------+--------------------------+

Analyzer

import org.apache.spark.sql.functions._
import com.github.uosdmlab.nkp.Analyzer

val df = spark.createDataset(
	Seq(
		"์•„๋ฒ„์ง€๊ฐ€๋ฐฉ์—๋“ค์–ด๊ฐ€์‹ ๋‹ค.",
		"์‚ฌ๋ž‘ํ•ด์š” ์ œํ”Œ๋ฆฐ!",
		"์ŠคํŒŒํฌ๋Š” ์žฌ๋ฐŒ์–ด",
		"๋‚˜๋Š”์•ผ ๋ฐ์ดํ„ฐ๊ณผํ•™์ž",
		"๋ฐ์ดํ„ฐ์•ผ~ ๋†€์ž~"
	)
).toDF("text")
	.withColumn("id", monotonically_increasing_id)

val analyzer = new Analyzer

val result = analyzer.transform(df)

result.show(truncate = false)

output:

+---+----+-------+-----------------------------------------------------+-----+---+
|id |word|pos    |feature                                              |start|end|
+---+----+-------+-----------------------------------------------------+-----+---+
|0  |์•„๋ฒ„์ง€ |[N]    |[NNG, *, F, ์•„๋ฒ„์ง€, *, *, *, *]                         |0    |3  |
|0  |๊ฐ€   |[J]    |[JKS, *, F, ๊ฐ€, *, *, *, *]                           |3    |4  |
|0  |๋ฐฉ   |[N]    |[NNG, *, T, ๋ฐฉ, *, *, *, *]                           |4    |5  |
|0  |์—   |[J]    |[JKB, *, F, ์—, *, *, *, *]                           |5    |6  |
|0  |๋“ค์–ด๊ฐ€ |[V]    |[VV, *, F, ๋“ค์–ด๊ฐ€, *, *, *, *]                          |6    |9  |
|0  |์‹ ๋‹ค  |[EP, E]|[EP+EF, *, F, ์‹ ๋‹ค, Inflect, EP, EF, ์‹œ/EP/*+แ†ซ๋‹ค/EF/*]   |9    |11 |
|0  |.   |[S]    |[SF, *, *, *, *, *, *, *]                            |11   |12 |
|1  |์‚ฌ๋ž‘  |[N]    |[NNG, *, T, ์‚ฌ๋ž‘, *, *, *, *]                          |0    |2  |
|1  |ํ•ด์š”  |[XS, E]|[XSV+EF, *, F, ํ•ด์š”, Inflect, XSV, EF, ํ•˜/XSV/*+์•„์š”/EF/*]|2    |4  |
|1  |์ œํ”Œ๋ฆฐ |[N]    |[NNP, *, T, ์ œํ”Œ๋ฆฐ, *, *, *, *]                         |5    |8  |
|1  |!   |[S]    |[SF, *, *, *, *, *, *, *]                            |8    |9  |
|2  |์ŠคํŒŒํฌ |[N]    |[NNG, *, F, ์ŠคํŒŒํฌ, *, *, *, *]                         |0    |3  |
|2  |๋Š”   |[J]    |[JX, *, T, ๋Š”, *, *, *, *]                            |3    |4  |
|2  |์žฌ๋ฐŒ  |[V]    |[VA, *, T, ์žฌ๋ฐŒ, *, *, *, *]                           |5    |7  |
|2  |์–ด   |[E]    |[EC, *, F, ์–ด, *, *, *, *]                            |7    |8  |
|3  |๋‚˜   |[N]    |[NP, *, F, ๋‚˜, *, *, *, *]                            |0    |1  |
|3  |๋Š”   |[J]    |[JX, *, T, ๋Š”, *, *, *, *]                            |1    |2  |
|3  |์•ผ   |[I]    |[IC, *, F, ์•ผ, *, *, *, *]                            |2    |3  |
|3  |๋ฐ์ดํ„ฐ |[N]    |[NNG, *, F, ๋ฐ์ดํ„ฐ, *, *, *, *]                         |4    |7  |
|3  |๊ณผํ•™์ž |[N]    |[NNG, *, F, ๊ณผํ•™์ž, Compound, *, *, ๊ณผํ•™/NNG/*+์ž/NNG/*]   |7    |10 |
+---+----+-------+-----------------------------------------------------+-----+---+
only showing top 20 rows

Dictionary

import com.github.uosdmlab.nkp.{Tokenizer, Dictionary}

val df = spark.createDataset(
	Seq(
		"๋•ํ›„๋ƒ„์ƒˆ๊ฐ€ ๋‚œ๋‹ค.",
		"๋„Œ ๋ˆˆ์น˜๋„ ์—†๋‹ˆ? ๋‚„๋ผ๋น ๋น !",
    "๋ฒ„์นด์ถฉํ–ˆ์–ด?",
    "C++"))
	.toDF("text")

val tokenizer = new Tokenizer()
	.setInputCol("text")
	.setOutputCol("words")

Dictionary.addWords("๋•ํ›„", "๋‚„๋ผ+๋น ๋น ,-100", "๋ฒ„์นด์ถฉ,-100", "C\\+\\+")

val result = tokenizer.transform(df)

result.show(truncate = false)

output:

+---------------+----------------------------+
|text           |words                       |
+---------------+----------------------------+
|๋•ํ›„๋ƒ„์ƒˆ๊ฐ€ ๋‚œ๋‹ค.      |[๋•ํ›„, ๋ƒ„์ƒˆ, ๊ฐ€, ๋‚œ๋‹ค, .]          |
|๋„Œ ๋ˆˆ์น˜๋„ ์—†๋‹ˆ? ๋‚„๋ผ๋น ๋น !|[๋„Œ, ๋ˆˆ์น˜, ๋„, ์—†, ๋‹ˆ, ?, ๋‚„๋ผ๋น ๋น , !]|
|๋ฒ„์นด์ถฉํ–ˆ์–ด?         |[๋ฒ„์นด์ถฉ, ํ–ˆ, ์–ด, ?]              |
|C++            |[C++]                       |
+---------------+----------------------------+

Noun TF-IDF with Pipeline

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{CountVectorizer, IDF}
import com.github.uosdmlab.nkp.Tokenizer

val df = spark.createDataset(
	Seq(
		"์•„๋ฒ„์ง€๊ฐ€๋ฐฉ์—๋“ค์–ด๊ฐ€์‹ ๋‹ค.",
		"์‚ฌ๋ž‘ํ•ด์š” ์ œํ”Œ๋ฆฐ!",
		"์ŠคํŒŒํฌ๋Š” ์žฌ๋ฐŒ์–ด",
		"๋‚˜๋Š”์•ผ ๋ฐ์ดํ„ฐ๊ณผํ•™์ž",
		"๋ฐ์ดํ„ฐ์•ผ~ ๋†€์ž~"
	)
).toDF("text")

val tokenizer = new Tokenizer()
	.setInputCol("text")
	.setOutputCol("words")
	.setFilter("N")

val cntVec = new CountVectorizer()
  .setInputCol("words")
  .setOutputCol("tf")

val idf = new IDF()
  .setInputCol("tf")
  .setOutputCol("tfidf")

val pipe = new Pipeline()
  .setStages(Array(tokenizer, cntVec, idf))

val pipeModel = pipe.fit(df)

val result = pipeModel.transform(df)

result.show

output:

+------------+-------------+--------------------+--------------------+
|        text|        words|                  tf|               tfidf|
+------------+-------------+--------------------+--------------------+
|์•„๋ฒ„์ง€๊ฐ€๋ฐฉ์—๋“ค์–ด๊ฐ€์‹ ๋‹ค.|     [์•„๋ฒ„์ง€, ๋ฐฉ]| (9,[1,5],[1.0,1.0])|(9,[1,5],[1.09861...|
|   ์‚ฌ๋ž‘ํ•ด์š” ์ œํ”Œ๋ฆฐ!|    [์‚ฌ๋ž‘, ์ œํ”Œ๋ฆฐ]| (9,[3,8],[1.0,1.0])|(9,[3,8],[1.09861...|
|    ์ŠคํŒŒํฌ๋Š” ์žฌ๋ฐŒ์–ด|        [์ŠคํŒŒํฌ]|       (9,[6],[1.0])|(9,[6],[1.0986122...|
|  ๋‚˜๋Š”์•ผ ๋ฐ์ดํ„ฐ๊ณผํ•™์ž|[๋‚˜, ๋ฐ์ดํ„ฐ, ๊ณผํ•™์ž]|(9,[0,2,7],[1.0,1...|(9,[0,2,7],[0.693...|
|   ๋ฐ์ดํ„ฐ์•ผ~ ๋†€์ž~|    [๋ฐ์ดํ„ฐ, ๋†€์ž]| (9,[0,4],[1.0,1.0])|(9,[0,4],[0.69314...|
+------------+-------------+--------------------+--------------------+

API

Tokenizer

๋ฌธ์žฅ์„ ํ˜•ํƒœ์†Œ ๋‹จ์œ„๋กœ ์ชผ๊ฐœ๋Š” transformer ์ž…๋‹ˆ๋‹ค. setFilter ํ•จ์ˆ˜๋กœ ์›ํ•˜๋Š” ํ’ˆ์‚ฌ์— ํ•ด๋‹นํ•˜๋Š” ํ˜•ํƒœ์†Œ๋งŒ์„ ๊ฑธ๋Ÿฌ๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ํ’ˆ์‚ฌ ํƒœ๊ทธ๋Š” ์•„๋ž˜์˜ ํ’ˆ์‚ฌ ํƒœ๊ทธ ์„ค๋ช…์„ ์ฐธ๊ณ ํ•˜์„ธ์š”.

Example

import com.github.uosdmlab.nkp.Tokenizer

val tokenizer = new Tokenizer()
	.setInputCol("text")
	.setOutputCol("words")
	.setFilter("N", "V", "SN")	// ์ฒด์–ธ, ์šฉ์–ธ, ์ˆซ์ž๋งŒ์„ ์ถœ๋ ฅ

POS Tag Descriptions

  • EP pre-final ending
  • E ending
  • I independent word (interjections)
  • J relational word (particles)
  • M modifier
  • N substantive (nouns belong here)
  • S symbol
  • SL foreign word
  • SH Chinese character
  • SN number
  • V predicate (verbs belong here)
  • VCP positive copula
  • XP prefix
  • XS suffix
  • XR root

Members

  • transform(dataset: Dataset[_]): DataFrame

Parameter Setters

  • setFilter(pos: String, poses: String*): Tokenizer
  • setInputCol(value: String): Tokenizer
  • setOutputCol(value: String): Tokenizer

Parameter Getters

  • getFilter: Array[String]
  • getInputCol: String
  • getOutputCol: String

Analyzer

A transformer for morphological analysis. It takes as input the sentences to analyze and an id that distinguishes each sentence.

Example

import com.github.uosdmlab.nkp.Analyzer

val analyzer = new Analyzer

Analyzer DataFrame Schema

Input Schema

The input DataFrame must contain the following columns. If the values of the id column are not unique, an error occurs. A unique ID is easy to generate with Spark's SQL function monotonically_increasing_id, as in the Analyzer example above and the sketch below the table.

Name    Description
id      unique ID distinguishing each text
text    the text to analyze
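
A minimal sketch of adding such an id column, using the same approach as the Analyzer example above:

import org.apache.spark.sql.functions.monotonically_increasing_id

// add a unique id per row so the Analyzer can tell the texts apart
val withId = df.withColumn("id", monotonically_increasing_id)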
Output Schema

Name    Description
id      unique ID distinguishing each text
word    the word
pos     part of speech
char    characteristic; seunjeon's feature
start   word start position
end     word end position

์ž์„ธํ•œ ํ’ˆ์‚ฌ ํƒœ๊ทธ ์„ค๋ช…์€ seunjeon์˜ ํ’ˆ์‚ฌ ํƒœ๊ทธ ์„ค๋ช… ์Šคํ”„๋ ˆ๋“œ ์‹œํŠธ๋ฅผ ์ฐธ๊ณ ํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค.

Members

  • transform(dataset: Dataset[_]): DataFrame

Parameter Setters

  • setIdCol(value: String): Analyzer
  • setTextCol(value: String): Analyzer
  • setWordCol(value: String): Analyzer
  • setPosCol(value: String): Analyzer
  • setCharCol(value: String): Analyzer
  • setStartCol(value: String): Analyzer
  • setEndCol(value: String): Analyzer

Parameter Getters

  • getIdCol: String
  • getTextCol: String
  • getWordCol: String
  • getPosCol: String
  • getCharCol: String
  • getStartCol: String
  • getEndCol: String
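
As a sketch, the setters above can point the Analyzer at custom column names. This assumes they follow the usual Spark ML convention of returning the transformer itself so that calls can be chained; the column names here are hypothetical:

import com.github.uosdmlab.nkp.Analyzer

// rename the expected input columns (hypothetical names)
val analyzer = new Analyzer()
	.setIdCol("docId")
	.setTextCol("sentence")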

Dictionary

An object for managing the user-defined dictionary. Words added to Dictionary are applied to both Tokenizer and Analyzer. User-defined words can be added with the addWords or addWordsFromCSV methods.

Example

import com.github.uosdmlab.nkp.Dictionary

Dictionary
  .addWords("๋•ํ›„", "๋‚„๋ผ+๋น ๋น ,-100")
  .addWords(Seq("๋ฒ„์นด์ถฉ,-100", "C\\+\\+"))
  .addWordsFromCSV("path/to/CSV1", "path/to/CSV2")
  .addWordsFromCSV("path/to/*.csv")

Dictionary.reset()  // reset the user-defined dictionary

Members

  • addWords(word: String, words: String*): Dictionary
  • addWords(words: Traversable[String]): Dictionary
  • addWordsFromCSV(path: String, paths: String*): Dictionary
  • addWordsFromCSV(paths: Traversable[String]): Dictionary
  • reset(): Dictionary

CSV Example

CSV files passed to addWordsFromCSV must not have a header and must contain two columns: word and cost. cost is the word's appearance cost; the smaller it is, the more likely the word is to appear. cost can be omitted. Because CSV files are loaded with spark.read.csv, files on HDFS can be used as well. Below is an example CSV file:

๋•ํ›„
๋‚„๋ผ+๋น ๋น ,-100
๋ฒ„์นด์ถฉ,-100
C\+\+

Compound nouns can be registered with +. To register the + character itself, use \+.
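
As a sketch, a dictionary CSV like the one above can also be loaded from HDFS (the path here is hypothetical):

import com.github.uosdmlab.nkp.Dictionary

// hypothetical HDFS path; addWordsFromCSV loads it through spark.read.csv
Dictionary.addWordsFromCSV("hdfs:///path/to/userdict.csv")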

Test

sbt test

Notes

This package was built against Spark 2.0.

๊ฐ์‚ฌ์˜ ๊ธ€

์€์ „ํ•œ๋‹ข ํ”„๋กœ์ ํŠธ์˜ ์œ ์˜ํ˜ธ๋‹˜, ์ด์šฉ์šด๋‹˜๊ป˜ ๊ฐ์‚ฌ์˜ ๋ง์”€ ๋“œ๋ฆฝ๋‹ˆ๋‹ค! ์—ฐ๊ตฌ์— ์ •๋ง ํฐ ๋„์›€์ด ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.