P-tuning v2
P-Tuning v2: Prompt Tuning Can Be Comparable to Finetuning Universally Across Scales and Tasks
An optimized prompt tuning strategy for smaller models and hard natural language understanding (NLU) tasks (e.g., sequence tagging).
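The core idea behind P-Tuning v2 is deep prompt tuning: trainable continuous prompts are injected at every transformer layer rather than only at the input embedding, while the backbone stays frozen. The toy sketch below illustrates that idea with NumPy; all dimensions, the `frozen_layers` stand-in blocks, and the prepend-then-drop mechanics are illustrative assumptions, not this repo's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not taken from the repo).
n_layers, prompt_len, seq_len, d_model = 4, 8, 16, 32

# Stand-in for frozen transformer blocks: one fixed projection per layer.
frozen_layers = [
    rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    for _ in range(n_layers)
]

# Deep prompts: a separate trainable prompt for EVERY layer
# (vs. input-only prompts in shallow prompt tuning).
deep_prompts = [
    rng.standard_normal((prompt_len, d_model)) * 0.02
    for _ in range(n_layers)
]

def forward(x):
    """Prepend each layer's prompt, apply the frozen layer, keep real tokens."""
    for layer_w, prompt in zip(frozen_layers, deep_prompts):
        h = np.concatenate([prompt, x], axis=0)  # (prompt_len + seq_len, d_model)
        h = np.tanh(h @ layer_w)                 # frozen transformation
        x = h[prompt_len:]                       # drop prompt positions
    return x

x = rng.standard_normal((seq_len, d_model))
out = forward(x)
print(out.shape)

# Only the prompts would be optimized; the backbone stays fixed.
trainable = sum(p.size for p in deep_prompts)
total = trainable + sum(w.size for w in frozen_layers)
print(f"trainable fraction: {trainable / total:.2%}")
```

In the real method the per-layer prompts act as extra attended-to positions (prefix-style key/values) inside each transformer block, but the parameter-efficiency story is the same: only the small prompt tensors receive gradients.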
This repo is still under construction (expected to take 2-4 weeks). Starring our repo would encourage us to work harder :)