

Eureka, and a very rough memo

sleepy_wug 2025. 7. 30. 11:44

Posts have been sparse, so I'm just uploading a quick and rough memo. Obviously, I will revise it further here.

 

The final boss:

 

Modern co-phonology is...

P(input ~ output) = P(phonotactic_i) * P(process_x) + P(phonotactic_j) * P(process_y)

 

phonotactic_i and phonotactic_j are the phonotactic gatekeeper grammars;

process_x and process_y are the co-phonologies (phonology proper).
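
Just to make the arithmetic concrete, a minimal Python sketch of the mixture, assuming each gatekeeper and each process hands back a probability. Every name and number here is a hypothetical placeholder, not a trained grammar.

```python
# Toy sketch of the mixture model above; all values are hypothetical placeholders.

def p_mapping(p_gk_i: float, p_proc_x: float,
              p_gk_j: float, p_proc_y: float) -> float:
    """P(input ~ output): each co-phonology's probability, weighted by its
    phonotactic gatekeeper grammar."""
    return p_gk_i * p_proc_x + p_gk_j * p_proc_y

# A word the gatekeepers route mostly to co-phonology x:
print(p_mapping(0.8, 0.9, 0.2, 0.1))  # 0.8*0.9 + 0.2*0.1 = 0.74
```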

 

 

P(LT) = P(GK) * PP

where,

GK: Gatekeeper grammar

PP: Phonology proper

 

When applying this as a model of L-Tensification...

e.g., a categorically LT-undergoing word: P(GK_lt) = 1, PP_lt <- [COR, -son] -> [+c.g.] / [lateral]_$ (rule sketch)
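
A minimal sketch of this categorical case, reading the environment as "a coronal obstruent right after a lateral". The segment inventory, the "t*" coding for tense ([+c.g.]) consonants, and the toy input /palta/ are my own illustrative assumptions, not data from this memo.

```python
# Sketch of P(LT) = P(GK) * PP for a categorically LT-undergoing word.
# Inventory, "t*" tense coding, and the toy input /palta/ are assumptions.

TENSE = {"t": "t*", "s": "s*", "c": "c*"}  # plain coronal obstruent -> tense

def pp_lt(segments: list[str]) -> list[str]:
    """PP_lt: [COR, -son] -> [+c.g.] / [lateral] __ (rule sketch)."""
    out = list(segments)
    for i in range(1, len(out)):
        if out[i - 1] == "l" and out[i] in TENSE:
            out[i] = TENSE[out[i]]
    return out

p_gk_lt = 1.0                         # categorically undergoing: P(GK_lt) = 1
print(pp_lt(list("palta")), p_gk_lt)  # ['p', 'a', 'l', 't*', 'a'] 1.0
```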

 

GK is either GK_e or GK_p, where GK_e is etymologically determined and GK_p is phonotactically determined.

 

specifically,

 

GK_e <- Sino-Korean generalization (regardless of LT examples in the existing data)

GK_p <- phonotactics of LT-undergoing examples.
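
One way to picture the two sources, as a pair of toy functions; both cues (the etymological label, the l + coronal cluster) are hypothetical stand-ins for grammars that would actually be learned from data.

```python
# Hypothetical stand-ins for the two gatekeeper variants.

def gk_e(etym_class: str) -> float:
    """GK_e: etymologically determined; fires on the Sino-Korean class
    regardless of attested LT examples."""
    return 1.0 if etym_class == "sino-korean" else 0.0

def gk_p(segments: str) -> float:
    """GK_p: phonotactically determined; fires on the shape shared by
    LT-undergoing examples (crudely: an l + plain-coronal cluster)."""
    return 1.0 if any(a == "l" and b in "tsc"
                      for a, b in zip(segments, segments[1:])) else 0.0

print(gk_e("sino-korean"), gk_p("palta"))  # 1.0 1.0
```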

 

Conventionally (i.e., up to 2020), GK is a MaxEnt grammar.

→ questioning the conventional GK: could it be a neural model, e.g., a transformer?
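
For reference, over a binary undergo-vs-faithful choice the conventional MaxEnt GK reduces to a softmax over harmonies. The constraint names and weights below are invented for illustration; the open question above is whether a neural model could fill the same slot.

```python
import math

# Two-candidate MaxEnt gatekeeper: P(cand) = exp(-H) / Z, where
# H = sum(weight * violations). Constraints and weights are hypothetical.

def maxent_p_undergo(h_undergo: float, h_faithful: float) -> float:
    zu, zf = math.exp(-h_undergo), math.exp(-h_faithful)
    return zu / (zu + zf)

weights = {"*lT": 3.0, "Ident(c.g.)": 1.0}
h_faithful = weights["*lT"] * 1         # faithful output keeps the plain /lt/
h_undergo = weights["Ident(c.g.)"] * 1  # tensified output violates faithfulness
print(maxent_p_undergo(h_undergo, h_faithful))  # ~0.88: GK favors undergoing
```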

 

PP: either categorical or gradient constraints (I will not argue this point; I will follow previous studies)

 

Likewise, in Maltese:

GK_e <- Romance-Maltese vs Semitic-Maltese

GK_p <- phonotactics of non-concatenative examples

PP: non-concatenative grammar