
text-analytics

Unstructured Data Analysis (Graduate) @Korea University

Notice

  • Syllabus (download)
  • Term Project Presentations and YouTube summary (2021 Spring) (link)
    • Term Project Presentations and YouTube summary (2020 Spring) (link)
  • Term Project Proposal Presentation
    • Instruction [download]
    • Deadline: 2021-03-31 23:59
  • Term Project Interim Presentation
    • Instruction [download]
    • Deadline: 2021-05-16 23:59
  • Term Project Final Presentation
    • Instruction [download]
    • Deadline: 2021-06-20 23:59

Schedule

  • 2021-03-09 (Tue): Term Project Meeting (Round 1)
    • Team 1: 강형원, 김지나, 김탁영, 김수빈
    • Team 2: 소규성, 이윤승, 정의석
    • Team 3: 김정섭, 허재혁, 최정우, 윤훈상
    • Team 4: 정은영, 오혜성, 조민영
  • 2021-03-11 (Thu): Term Project Meeting (Round 1)
    • Team 5: 김동균, 이찬호, 차형주
    • Team 6: 김형주, 천주영, 신동환
    • Team 7: 허종국, 임새린, 고은지, 황석철
    • Team 8: 김아름, 최병록, 임민아, 임수연
  • 2021-03-16 (Tue): Topic Discussion and QA (Topic 1, 2)
    • Complete video viewing and register questions: 2021-03-09
    • Register student answers to the questions: 2021-03-14
    • Upload YouTube summary video: 2021-03-18
  • 2021-03-18 (Thu): No class
  • 2021-03-23 (Tue): Term Project Meeting (Round 1)
    • Team 9: 유이경, 고은성, 조경선, 김지은
    • Team 10: 김태연, 안시후, 오혜령, 조한샘
    • Team 11: 안인범, 김정원, 황성진, 김상민
    • Team 12: 정회찬, 신우석, 양석우, 노상균, 김진섭, 김영섭
  • 2021-03-25 (Thu): No class
  • 2021-03-30 (Tue): Topic Discussion and QA (Topic 4, 5)
    • Complete video viewing and register questions: 2021-03-23
    • Register student answers to the questions: 2021-03-28
    • Upload YouTube summary video: 2021-04-01
  • 2021-04-01 (Thu): Term Project Meeting (Round 2)
    • Team 1, Team 2, Team 3
  • 2021-04-06 (Tue): Topic Discussion and QA (Topic 6)
    • Complete video viewing and register questions: 2021-03-30
    • Register student answers to the questions: 2021-04-04
    • Upload YouTube summary video: 2021-04-08
    • Term Project Meeting (Round 2)
      • Team 4, Team 5, Team 6
  • 2021-04-13 (Tue): Topic Discussion and QA (Topic 7)
    • Complete video viewing and register questions: 2021-04-06
    • Register student answers to the questions: 2021-04-11
    • Upload YouTube summary video: 2021-04-15
  • 2021-04-15 (Thu): Term Project Meeting (Round 2)
    • Team 7, Team 8, Team 9
  • 2021-04-20 (Tue): Topic Discussion and QA (Topic 8 - Seq2Seq Learning & Transformer)
    • Complete video viewing and register questions: 2021-04-13
    • Register student answers to the questions: 2021-04-18
    • Upload YouTube summary video: 2021-04-22
  • 2021-04-22 (Thu): Term Project Meeting (Round 2)
    • Team 10, Team 11, Team 12
  • 2021-04-27 (Tue): Topic Discussion and QA (Topic 8 - ELMo, GPT, BERT)
    • Complete video viewing and register questions: 2021-04-20
    • Register student answers to the questions: 2021-04-25
    • Upload YouTube summary video: 2021-04-29
  • 2021-04-29 (Thu): Term Project Meeting (Round 3)
    • Team 1, Team 2, Team 3, Team 4
  • 2021-05-04 (Tue): Topic Discussion and QA (Topic 8 - GPT-2, Transformer to T5, GPT-3)
    • Complete video viewing and register questions: 2021-04-27
    • Register student answers to the questions: 2021-05-01
    • Upload YouTube summary video: 2021-05-06
    • Term Project Meeting (Round 3)
      • Team 5, Team 6, Team 7, Team 8
  • 2021-05-06 (Thu): Term Project Meeting (Round 3)
    • Team 9, Team 10, Team 11, Team 12
  • 2021-05-11 (Tue): Topic Discussion and QA (Topic 9 - Document Classification)
    • Complete video viewing and register questions: 2021-05-04
    • Register student answers to the questions: 2021-05-09
    • Upload YouTube summary video: 2021-05-13
  • 2021-05-13 (Thu): Term Project Interim Presentation preparation
  • 2021-05-18 (Tue): Topic Discussion and QA (Topic 10 - Sentiment Analysis)
    • Complete video viewing and register questions: 2021-05-11
    • Register student answers to the questions: 2021-05-16
    • Upload YouTube summary video: 2021-05-20
  • 2021-05-20 (Thu): Term Project Meeting (Round 4)
    • Team 1, Team 2, Team 3, Team 4
  • 2021-05-25 (Tue): Topic Discussion and QA (Topic 11 - Text Summarization (Extractive Summarization))
    • Complete video viewing and register questions: 2021-05-18
    • Register student answers to the questions: 2021-05-23
    • Upload YouTube summary video: 2021-05-27
  • 2021-05-27 (Thu): Term Project Meeting (Round 4)
    • Team 5, Team 6, Team 7, Team 8
  • 2021-06-01 (Tue) & 2021-06-03 (Thu): No class (KIIE Spring Joint Conference, 산업공학회 춘계공동학술대회)
  • 2021-06-08 (Tue): Topic Discussion and QA (Topic 11 - Text Summarization (Abstractive Summarization))
    • Complete video viewing and register questions: 2021-06-01
    • Register student answers to the questions: 2021-06-06
    • Upload YouTube summary video: 2021-06-10
  • 2021-06-10 (Thu): Term Project Meeting (Round 4)
    • Team 9, Team 10, Team 11, Team 12
  • 2021-06-17 (Thu): Term Project Final Presentation

Recommended courses

Contents

Topic 1: Introduction to Text Analytics

  • Text Analytics: Backgrounds, Applications & Challenges, and Process [Slide], [Video] (2021-03-09)
  • Text Analytics Process [Slide], [Video] (2021-03-09)
  • Q & A Session [Slide]
  • Reading materials

Topic 2: Text Preprocessing

  • Introduction to Natural Language Processing (NLP) [Slide], [Video] (2021-03-09)
  • Lexical analysis [Slide], [Video] (2021-03-09)
  • Syntax analysis & Other topics in NLP [Slide], [Video] (2021-03-09)
  • Reading materials
    • Cambria, E., & White, B. (2014). Jumping NLP curves: A review of natural language processing research. IEEE Computational intelligence magazine, 9(2), 48-57. (PDF)
    • Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug), 2493-2537. (PDF)
    • Young, T., Hazarika, D., Poria, S., & Cambria, E. (2017). Recent trends in deep learning based natural language processing. arXiv preprint arXiv:1708.02709. (PDF)
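The lexical-analysis steps above (tokenization, normalization, stopword removal) can be sketched in a few lines. This is a minimal illustration, not the pipeline used in the lectures; the stopword list here is a toy placeholder.

```python
import re

# Toy stopword list; real pipelines use much larger, language-specific lists.
STOPWORDS = {"a", "an", "the", "of", "and", "is", "to"}

def tokenize(text: str) -> list[str]:
    """Split raw text into lowercase word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

def preprocess(text: str) -> list[str]:
    """Tokenize and drop stopwords."""
    return [t for t in tokenize(text) if t not in STOPWORDS]

print(preprocess("The syntax and semantics of a language"))
# → ['syntax', 'semantics', 'language']
```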

Topic 3: Neural Networks Basics (Optional, No Video Lectures)

  • Perceptron, Multi-layer Perceptron
  • Convolutional Neural Networks (CNN)
  • Recurrent Neural Networks (RNN)
  • Practical Techniques
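As a companion to the perceptron material, here is a minimal single perceptron trained on the AND function. The learning rate and epoch count are arbitrary illustrative choices; AND is linearly separable, so the perceptron learning rule converges.

```python
# Train a single perceptron with the classic perceptron learning rule.
def train_perceptron(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 when correct; ±1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND truth table as (input, label) pairs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])
# → [0, 0, 0, 1]
```

A single perceptron cannot learn XOR, which is the classic motivation for the multi-layer perceptron covered in the same lecture.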

Topic 4: Text Representation I: Classic Methods

  • Bag of words, Word weighting, N-grams [Slide], [Video] (2021-03-09)
  • Q & A Session [Slide]
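The bag-of-words and word-weighting ideas above can be sketched with a compact TF-IDF computation. This uses the common idf = log(N / df) variant; the lectures may use a smoothed form, so treat the exact formula as an assumption.

```python
import math
from collections import Counter

docs = [
    "text analytics extracts knowledge from text",
    "word weighting gives rare words more weight",
]

def tf_idf(docs):
    tokenized = [d.split() for d in docs]
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)               # raw term frequency
        scores.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return scores

scores = tf_idf(docs)
# "text" occurs twice in doc 0 and in only one document, so its
# score is 2 * log(2); a term in every document would score 0.
print(round(scores[0]["text"], 3))
# → 1.386
```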

Topic 5: Text Representation II: Distributed Representation

  • Neural Network Language Model (NNLM) [Slide], [Video] (2021-03-23)
  • Word2Vec [Slide], [Video], [Optional Video (Presenter: 김지나)] (2021-03-23)
  • GloVe [Slide], [Video], [Optional Video (Presenter: 조규원)] (2021-03-23)
  • FastText, Doc2Vec, and Other Embeddings [Slide], [Video], [Optional Video (Presenter: 김수빈)] (2021-03-23)
  • Q & A Session [Slide]
  • (Optional) Analogies Explained: Towards Understanding Word Embeddings [Video (Presenter: 김명섭)]
  • Reading materials
    • Bengio, Y., Ducharme, R., Vincent, P., & Jauvin, C. (2003). A neural probabilistic language model. Journal of machine learning research, 3(Feb), 1137-1155. (PDF)
    • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. (PDF)
    • Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111-3119). (PDF)
    • Pennington, J., Socher, R., & Manning, C. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543). (PDF)
    • Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2016). Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. (PDF)
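Word2Vec's skip-gram variant trains on (center word, context word) pairs drawn from a sliding window. The sketch below only builds those training pairs; the actual embedding training (negative sampling, SGD), covered in the Mikolov et al. papers above, is omitted.

```python
# Generate skip-gram (center, context) training pairs from a token list.
def skipgram_pairs(tokens, window=2):
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)                  # window clipped at edges
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

toks = ["the", "cat", "sat", "on", "the", "mat"]
pairs = skipgram_pairs(toks, window=1)
print(pairs[:3])
# → [('the', 'cat'), ('cat', 'the'), ('cat', 'sat')]
```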

Topic 6: Dimensionality Reduction

  • Dimensionality Reduction Overview, Supervised Feature Selection [Slide], [Video] (2021-03-30)
  • Unsupervised Feature Extraction [Slide], [Video] (2021-03-30)
  • Reading materials
    • Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American society for information science, 41(6), 391-407. (PDF)
    • Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to latent semantic analysis. Discourse processes, 25(2-3), 259-284. (PDF)
    • Maaten, L. V. D., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of machine learning research, 9(Nov), 2579-2605. (PDF) (Homepage)
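Latent semantic analysis, the unsupervised feature-extraction method cited above, factorizes a term-document matrix with SVD and keeps the top-k singular directions. A minimal sketch with a toy 4×3 matrix (numbers are arbitrary):

```python
import numpy as np

# Toy term-document count matrix: rows are terms, columns are documents.
X = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
    [0.0, 3.0, 1.0],
    [0.0, 1.0, 2.0],
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # number of latent dimensions to keep
# Each row of doc_coords is a document in the k-dimensional latent space.
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T
print(doc_coords.shape)
```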

Topic 7: Topic Modeling as a Distributed Representation

  • Topic modeling overview & Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis: pLSA [Slide], [Video] (2021-04-06)
  • LDA: Document Generation Process [Slide], [Video] (2021-04-06)
  • LDA Inference: Collapsed Gibbs Sampling, LDA Evaluation [Slide], [Video] (2021-04-06)
  • Q & A Session [Slide]
  • Reading Materials
    • Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American society for information science, 41(6), 391. (PDF)
    • Dumais, S. T. (2004). Latent semantic analysis. Annual review of information science and technology, 38(1), 188-230.
    • Hofmann, T. (1999, July). Probabilistic latent semantic analysis. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence (pp. 289-296). Morgan Kaufmann Publishers Inc. (PDF)
    • Hofmann, T. (2017, August). Probabilistic latent semantic indexing. In ACM SIGIR Forum (Vol. 51, No. 2, pp. 211-218). ACM.
    • Blei, D. M. (2012). Probabilistic topic models. Communications of the ACM, 55(4), 77-84. (PDF)
    • Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of machine Learning research, 3(Jan), 993-1022. (PDF) [Optional Video (Presenter: 윤훈상)]
  • Recommended video lectures
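LDA's document generation process, covered in the lecture above, draws a topic for each word position from the document's topic distribution and then a word from that topic's word distribution. In the sketch below the distributions are fixed toy values rather than Dirichlet draws, to keep it short.

```python
import random

random.seed(0)  # reproducible sampling

# Toy topic-word distributions and a toy per-document topic distribution.
topics = {
    0: {"model": 0.5, "data": 0.3, "learning": 0.2},
    1: {"league": 0.6, "match": 0.4},
}
doc_topic_dist = {0: 0.7, 1: 0.3}

def sample(dist):
    """Draw one key from a {value: probability} dict."""
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate_document(n_words):
    words = []
    for _ in range(n_words):
        z = sample(doc_topic_dist)        # 1) draw a topic for this position
        words.append(sample(topics[z]))   # 2) draw a word from that topic
    return words

doc = generate_document(5)
print(doc)
```

Inference (e.g. the collapsed Gibbs sampling covered in the lecture) runs this generative story in reverse: given only the words, it recovers the latent topic assignments.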

Topic 8: Language Modeling & Pre-trained Models

  • Sequence-to-Sequence Learning [Slide], [Video] (2021-04-13)
  • Transformer [Slide], [Video], [Optional Video (Presenter: 김동화)], [Optional Video (Presenter: 소규성)] (2021-04-13)
  • ELMo: Embeddings from Language Models [Slide], [Video] (2021-04-20)
  • GPT: Generative Pre-Training of a Language Model [Slide], [Video] (2021-04-20)
  • BERT: Bidirectional Encoder Representations from Transformer [Slide], [Video] (2021-04-20)
  • GPT-2: Language Models are Unsupervised Multitask Learners [Slide], [Video] (2021-04-27)
  • Transformer to T5 (XLNet, RoBERTa, MASS, BART, MT-DNN, T5) [Video (Presenter: 이유경)] (2021-04-27)
  • GPT-3: Language Models are Few-Shot Learners [Slide], [Video] (2021-04-27)
  • (Optional) How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings [Slide], [Video (Presenter: 이유경)]
  • (Optional) Syntax and Semantics in Language Model Representation [Video (Presenter: 김명섭)]
  • Reading Materials
    • Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems (pp. 3104-3112). (PDF)
    • Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. (PDF)
    • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems (pp. 5998-6008). (PDF)
    • Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365. (PDF)
    • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. (PDF)
    • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. (PDF)
    • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9. (PDF)
    • Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. (PDF)
    • Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. (PDF)
    • Song, K., Tan, X., Qin, T., Lu, J., & Liu, T. Y. (2019). Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450. (PDF)
    • Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., ... & Zettlemoyer, L. (2019). Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. (PDF)
    • Liu, X., He, P., Chen, W., & Gao, J. (2019). Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. (PDF)
    • Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. (PDF)
    • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165. (PDF)
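The core operation shared by the Transformer and all the pre-trained models above is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V (Vaswani et al., 2017). A minimal NumPy sketch with toy shapes:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V, weights          # weighted mixture of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries, dimension 4
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values
out, w = attention(Q, K, V)
print(out.shape)
```

Multi-head attention simply runs several such attentions in parallel on learned projections of Q, K, and V and concatenates the results.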

Topic 9: Document Classification

  • Document classification overview, Vector Space Models (Naive Bayesian Classifier, k-Nearest Neighbor Classifier) [Slide], [Video] (2021-05-04)
  • (Optional) Other VSM-based classification (Lecture videos are taken from IMEN415 (Multivariate Data Analysis for Undergraduate Students @Korea University))
  • CNN-based document classification [Slide], [Video], [(Optional) Presenter: 이기창] (2021-05-04)
  • RNN-based document classification [Slide], [Video] (2021-05-04)
  • Reading materials
    • Kim, Y. (2014). Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. (PDF)
    • Zhang, X., Zhao, J., & LeCun, Y. (2015). Character-level convolutional networks for text classification. In Advances in neural information processing systems (pp. 649-657) (PDF)
    • Lee, G., Jeong, J., Seo, S., Kim, C, & Kang, P. (2018). Sentiment classification with word localization based on weakly supervised learning with a convolutional neural network. Knowledge-Based Systems, 152, 70-82. (PDF)
    • Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., & Hovy, E. (2016). Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1480-1489). (PDF)
    • Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. (PDF)
    • Luong, M. T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025. (PDF)
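The Naive Bayesian classifier from the vector-space-model lecture can be sketched as a multinomial Naive Bayes with Laplace (add-one) smoothing. The four-document corpus is a toy example.

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        vocab = set()
        for doc, label in zip(docs, labels):
            toks = doc.split()
            self.word_counts[label].update(toks)
            vocab.update(toks)
        self.vocab_size = len(vocab)
        self.n_docs = len(docs)
        return self

    def predict(self, doc):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.class_counts[c] / self.n_docs)  # log prior
            total = sum(self.word_counts[c].values())
            for t in doc.split():
                # add-one smoothing avoids zero probabilities for unseen words
                lp += math.log((self.word_counts[c][t] + 1) / (total + self.vocab_size))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = NaiveBayes().fit(
    ["great fun film", "boring slow film", "great cast", "slow boring plot"],
    ["pos", "neg", "pos", "neg"],
)
print(nb.predict("great film"))
# → pos
```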

Topic 10: Sentiment Analysis

  • Architecture of sentiment analysis [Slide], [Video] (2021-05-11)
  • Lexicon-based approach [Slide], [Video] (2021-05-11)
  • Machine learning-based approach [Slide], [Video] (2021-05-11)
  • Reading materials
    • Hamilton, W. L., Clark, K., Leskovec, J., & Jurafsky, D. (2016, November). Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing (Vol. 2016, p. 595). NIH Public Access. (PDF)
    • Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1253. (PDF)
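The lexicon-based approach above can be reduced to a few lines: look each token up in a polarity lexicon and sum the scores. The tiny lexicon and the one-token negation rule here are toy assumptions; induced lexicons such as those in Hamilton et al. (2016) are far larger.

```python
# Toy polarity lexicon and negator list (illustrative values only).
LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    tokens = text.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        polarity = LEXICON.get(tok, 0)
        # Flip polarity when the immediately preceding token is a negator.
        if i > 0 and tokens[i - 1] in NEGATORS:
            polarity = -polarity
        score += polarity
    return score

print(sentiment("not bad but never great"))
# → -1
```

Machine-learning-based approaches, covered in the next lecture segment, replace the hand-built lexicon with a trained classifier.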

Topic 11: Text Summarization

  • Text summarization Overview [Slide], [Video] (2021-05-18)
  • Extractive Summarization 1: Graph-based Summarization (TextRank, LexRank, DivRank) [Slide], [Video] (2021-05-18)
  • Extractive Summarization 2: Neural Network-based Summarization (SummaRuNNer, NeuSum) [Slide], [Video] (2021-05-18)
  • Abstractive Summarization [Slide], [Video] (2021-06-01)
  • Extractive/Abstractive Summarization (2021-06-01)
  • Reading materials
    • Lin, C. Y. (2004, July). Rouge: A package for automatic evaluation of summaries. In Text summarization branches out (pp. 74-81). (PDF)
    • Ganesan, K. (2018). Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. arXiv preprint arXiv:1803.01937. (PDF)
    • Mihalcea, R., & Tarau, P. (2004, July). Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing (pp. 404-411). (PDF)
    • Erkan, G., & Radev, D. R. (2004). Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22, 457-479. (PDF)
    • Mei, Q., Guo, J., & Radev, D. (2010, July). Divrank: the interplay of prestige and diversity in information networks. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1009-1018). (PDF)
    • Nallapati, R., Zhai, F., & Zhou, B. (2017, February). Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 31, No. 1). (PDF)
    • Zhou, Q., Yang, N., Wei, F., Huang, S., Zhou, M., & Zhao, T. (2018). Neural document summarization by jointly learning to score and select sentences. arXiv preprint arXiv:1807.02305. (PDF)
    • Nallapati, R., Zhou, B., Gulcehre, C., & Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. (PDF)
    • See, A., Liu, P. J., & Manning, C. D. (2017). Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. (PDF)
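Graph-based extractive summarization (TextRank/LexRank above) scores sentences by running PageRank-style power iteration on a sentence-similarity graph. The sketch below uses TextRank's overlap similarity normalized by log sentence lengths; the three sentences are a toy corpus.

```python
import numpy as np

def textrank(sentences, damping=0.85, iters=50):
    n = len(sentences)
    tok = [set(s.lower().split()) for s in sentences]
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                overlap = len(tok[i] & tok[j])
                denom = np.log(len(tok[i]) + 1) + np.log(len(tok[j]) + 1)
                sim[i, j] = overlap / denom if denom else 0.0
    # Column-normalize so each sentence distributes its score to neighbors.
    col = sim.sum(axis=0, keepdims=True)
    col[col == 0] = 1.0
    M = sim / col
    r = np.full(n, 1.0 / n)               # uniform initial scores
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    return r

sents = [
    "text summarization selects important sentences",
    "graph based methods rank sentences by similarity",
    "important sentences receive high rank scores",
]
scores = textrank(sents)
print(scores)
```

A summary is then the top-k sentences by score; neural extractive models such as SummaRuNNer learn the scoring function instead.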

Topic 12: Question and Answering (Optional)

  • Question and Answering (CS224N) [Video (Presenter: 조규원)]
  • SQuAD: 100,000+ questions for machine comprehension of text [Video (Presenter: 김형석)]
  • Know what you don't know: Unanswerable questions for SQuAD [Video (Presenter: 김형석)]
  • End-To-End Memory Networks [Video (Presenter: 허재혁)]
  • Ask Me Anything: Dynamic Memory Networks for Natural Language Processing [Video (Presenter: 김지나)]
  • (Optional) A Recurrent BERT-based Model for Question Generation & Learning to Answer by Learning to Ask: Getting the Best of GPT-2 and BERT Worlds [Video (Presenter: 조규원)]
  • (Optional) VQA: Visual Question Answering [Video (Presenter: 이윤승)]
  • Reading Materials
    • Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. (PDF)
    • Rajpurkar, P., Jia, R., & Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822. (PDF)
    • Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-to-end memory networks. arXiv preprint arXiv:1503.08895. (PDF)
    • Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., ... & Socher, R. (2016, June). Ask me anything: Dynamic memory networks for natural language processing. In International conference on machine learning (pp. 1378-1387). PMLR. (PDF)

Topic 13: (Open) Information Extraction (Optional)