Introducing artificial intelligence production services on the Databricks platform
We will explore Generative AI, specifically Large Language Models (LLMs), and how they can be applied to real-world problems. Our focus is natural language processing (NLP) with popular libraries such as Hugging Face Transformers and LangChain. I will guide you through pre-training, fine-tuning, and prompt engineering, and show how this knowledge can be used to build a custom chat model with the Retrieval Augmented Generation (RAG) approach, sketched below. We will also look at methods for evaluating the effectiveness and bias of LLMs.
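To make the RAG idea concrete before we dive into the details, here is a minimal sketch of the pattern: embed a corpus into a vector index, retrieve the passages most relevant to a question, and prepend them to the prompt before generation. The model names and the toy corpus are illustrative placeholders, and the sketch assumes `sentence-transformers`, `faiss`, and `transformers` are installed; it is not the exact pipeline we will build later.

```python
# Minimal RAG sketch: embed -> retrieve -> augment prompt -> generate.
# Model names and the toy corpus are illustrative, not prescriptive.
import faiss
from sentence_transformers import SentenceTransformer
from transformers import pipeline

docs = [
    "DBRX is an open LLM released by Databricks.",
    "MLflow can track parameters, metrics, and artifacts of LLM experiments.",
    "LoRA fine-tunes a model by training low-rank adapter matrices.",
]

# 1. Embed the corpus with a small sentence encoder and store it in a FAISS
#    index (a vector library, as opposed to a vector database).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(doc_vectors)

# 2. Retrieve the passages most relevant to the user's question.
question = "What does LoRA do?"
query_vector = encoder.encode([question], normalize_embeddings=True)
_, hits = index.search(query_vector, 2)
context = "\n".join(docs[i] for i in hits[0])

# 3. Augment the prompt with the retrieved context and generate an answer.
generator = pipeline("text-generation", model="gpt2")
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```

In a production setting the small encoder and GPT-2 stand-in would be replaced by a stronger embedding model and chat model, and the in-memory FAISS index by a managed vector store, but the retrieve-then-generate structure stays the same.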
Topics of discussion will include common NLP tasks; prompt engineering; Retrieval Augmented Generation (RAG) and the general approach behind it; vector libraries versus vector databases; multi-stage reasoning with LLMs; LangChain and the ReAct paradigm; model fine-tuning, covering full fine-tuning, fine-tuning with DeepSpeed, and parameter-efficient fine-tuning (PEFT) techniques such as additive PEFT (prompt tuning) and re-parameterization PEFT (LoRA); evaluation of LLMs; LLMOps, which involves creating a Hugging Face pipeline and tracking LLM development with MLflow (see the sketch below); and the risks and challenges associated with Generative AI.
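As a taste of the LLMOps topic, the sketch below wraps a model in a Hugging Face pipeline and records the run with MLflow. The model name, logged parameters, and metric value are placeholders, and a recent MLflow release with the `transformers` flavor is assumed; treat it as an outline of the workflow rather than the exact setup we will use.

```python
# Hedged LLMOps sketch: build a Hugging Face pipeline, then track it with MLflow.
import mlflow
from transformers import pipeline

# Wrap a pre-trained model in a task-specific pipeline (model name is illustrative).
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with mlflow.start_run(run_name="summarization-baseline"):
    # Log the configuration used to build the pipeline.
    mlflow.log_param("base_model", "sshleifer/distilbart-cnn-12-6")
    mlflow.log_param("task", "summarization")

    # Log an illustrative evaluation metric (it would be computed elsewhere in a real workflow).
    mlflow.log_metric("rouge_l", 0.42)

    # Log the pipeline itself so it can be versioned, registered, and served later.
    mlflow.transformers.log_model(
        transformers_model=summarizer,
        artifact_path="summarizer",
    )
```

Logging the pipeline as an MLflow model is what ties experimentation to deployment: the same artifact can later be registered in the model registry and served, which is the core of the LLMOps story we will develop.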
Models that will be discussed include DBRX (Databricks), Gemma (Google), ChatGPT and GPT-3 (OpenAI), LLaMA (Meta), Dolly (Databricks), and MPT (MosaicML).