
阿升
2023/04/11
ChatGPT Large Model Full-Stack Technology Explained: Johns Hopkins 2023 "NLP: Self-supervised Models"
Large self-supervised (pre-trained) models have transformed data-driven fields such as natural language processing (NLP). This course gives students a comprehensive introduction to self-supervised learning techniques for NLP applications. Through lectures, assignments, and a final project, students learn the skills needed to design, implement, and understand their own self-supervised neural network models using the PyTorch framework.
The course contents are as follows:
1. Course overview: plan and expectations
2. Word meaning and representation
3. Word2vec objective function (continued); inspecting and evaluating word vectors (see the skip-gram sketch after this list)
4. Word2vec limitations and modeling context; feedforward networks; neural nets: a brief history; Word2vec as a simple feedforward net
5. Analytical backpropagation; automatic differentiation; practical tips for training neural networks
6. Language modeling; N-gram models; evaluating LMs
7. Measuring LM quality; fixed-window language modeling with FFNs
8. Text generation algorithms; recurrent neural networks; encoder-decoder models
9. RNNs continued: ELMo; language units and subwords
10. Self-attention; the Transformer (see the self-attention sketch after this list)
11. Encoder family (BERT, RoBERTa, ...)
12. Encoder-decoder family (T5, BART); decoder family (GPTk)
13. Decoder family (GPTk); in-context learning
14. In-context learning; adapting models with prompting (prompt engineering); failure modes of in-context learning
15. Multi-step reasoning via prompts; adapting models with parameter change (head-tuning, prompt-tuning, adapters)
16. Scaling laws; modifying self-attention for long context; retrieval-augmented language models
17. Social concerns about LMs: bias, fairness, and toxic language; hallucination, truthfulness, and veracity
18. Alignment via language instructions: existing solutions and challenges
19. Vision-language models
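To make the Word2vec objective in items 3–4 concrete, here is a minimal sketch (not taken from the course materials) of the skip-gram formulation with a full-softmax loss in PyTorch. The vocabulary size, embedding dimension, and toy batch are arbitrary assumptions for illustration; a real implementation would add negative sampling and a data pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for the example.
vocab_size, embed_dim = 10_000, 128

class SkipGram(nn.Module):
    """Word2vec skip-gram: predict a context word from a center word."""
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.center = nn.Embedding(vocab_size, embed_dim)   # "input" vectors
        self.context = nn.Embedding(vocab_size, embed_dim)  # "output" vectors

    def forward(self, center_ids):
        # Score every vocabulary word as a context of each center word.
        v = self.center(center_ids)          # (batch, dim)
        return v @ self.context.weight.t()   # (batch, vocab) logits

model = SkipGram(vocab_size, embed_dim)
loss_fn = nn.CrossEntropyLoss()              # softmax objective over the vocabulary

# Toy batch of (center, context) id pairs.
center_ids = torch.randint(0, vocab_size, (32,))
context_ids = torch.randint(0, vocab_size, (32,))
loss = loss_fn(model(center_ids), context_ids)
loss.backward()
```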
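Similarly, the self-attention mechanism named in item 10 can be sketched in a few lines. This is only an illustrative single-head, scaled dot-product self-attention module without masking or multiple heads, not the course's implementation; the tensor shapes below are arbitrary.

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention (no masking)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                     # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        weights = scores.softmax(dim=-1)      # attention distribution per token
        return weights @ v                    # (batch, seq_len, dim)

x = torch.randn(2, 16, 64)                    # toy batch: 2 sequences of 16 tokens
out = SelfAttention(64)(x)
print(out.shape)                              # torch.Size([2, 16, 64])
```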
References:
[1] CS 601.471/671 NLP: Self-supervised Models_up: https://url39.ctfile.com/d/2501739-55487961-bf9650?p=2096 (access password: 2096)
[2] CS 601.471/671 NLP: Self-supervised Models: https://self-supervised.cs.jhu.edu/sp2023/index.html
About the author

阿升
吾爱DotNet (WeChat official account)