- Overall 7 years of experience, with at least 5 years working with transformer-based models and NLP tasks, focusing on text generation, summarization, question answering, classification, and similar tasks.
- Expertise in transformer models such as GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), RoBERTa, and similar models.
- Familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.
- Experience in fine-tuning pre-trained models on domain-specific datasets for tasks such as text generation, summarization, question answering, classification, and translation (a minimal fine-tuning sketch follows this list).
- Familiarity with concepts such as attention mechanisms, context windows, tokenization, and embedding layers.
- Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
- Expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs.
- Experience in instruction-based prompting and in using zero-shot, few-shot, and many-shot learning techniques to maximize model performance without retraining.
- Experience in iterating on prompts to refine outputs, test model performance, and ensure consistent results.
- Crafting prompt templates for repetitive tasks, ensuring prompts are adaptable to different contexts and inputs (see the prompt-template sketch after this list).
- Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns.
- Proficiency in Python and experience with NLP libraries (e.g., Hugging Face Transformers, spaCy, NLTK).
- Experience with transformer-based models (e.g., GPT, BERT, T5) for text generation tasks.
- Experience in training, fine-tuning, and deploying machine learning models in an NLP context.
- Understanding of model evaluation metrics (e.g., BLEU, ROUGE); a short ROUGE scoring sketch appears after this list.
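
For the fine-tuning requirement above, a minimal sketch of adapting a pre-trained transformer to a domain-specific classification dataset with the Hugging Face Trainer API. The base checkpoint, dataset, label count, and hyperparameters here are illustrative assumptions, not a prescribed recipe.

```python
# Sketch: fine-tuning a pre-trained transformer on a domain dataset (assumed choices throughout).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"   # assumed base checkpoint
dataset = load_dataset("imdb")           # assumed domain-specific dataset

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad so every example fits within the model's context window.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
print(trainer.evaluate())
```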
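For the prompt-template, few-shot, and chain-of-thought items, a minimal sketch of a reusable prompt builder driven through a Hugging Face text-generation pipeline. The model name, example reviews, and labels are placeholders; in practice a larger instruction-tuned model would be needed for CoT-style reasoning to be meaningful.

```python
# Sketch: reusable few-shot / chain-of-thought prompt template (hypothetical examples).
from transformers import pipeline

# Placeholder model for illustration; swap in any causal LM available to you.
generator = pipeline("text-generation", model="gpt2")

FEW_SHOT_EXAMPLES = [
    ("The battery drains within an hour of normal use.", "negative"),
    ("Setup took two minutes and the screen is gorgeous.", "positive"),
]

def build_prompt(review: str) -> str:
    """Assemble a few-shot prompt that asks the model to reason step by step."""
    lines = ["Classify the sentiment of each review. Think step by step before answering.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

output = generator(build_prompt("The keyboard feels cheap but the speakers are great."),
                   max_new_tokens=40, do_sample=False)
print(output[0]["generated_text"])
```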
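For the evaluation-metrics item, a minimal sketch of scoring a generated summary against a reference with ROUGE via the rouge_score package; the reference and candidate strings are made-up examples.

```python
# Sketch: computing ROUGE precision/recall/F1 for a generated summary.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "The committee approved the budget after a two-hour debate."
candidate = "The budget was approved by the committee following a lengthy debate."

scores = scorer.score(reference, candidate)
for name, result in scores.items():
    # Each entry carries precision, recall, and F1 for that ROUGE variant.
    print(f"{name}: precision={result.precision:.3f} recall={result.recall:.3f} f1={result.fmeasure:.3f}")
```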