# Question Answering Model

## Overview
This BERT-based model performs extractive question answering: given a question and a context passage, it predicts the span of the passage that answers the question. Fine-tuned on SQuAD-style datasets, it is suited to reading-comprehension tasks that require precise, span-based answers.
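A minimal usage sketch with the Hugging Face `transformers` pipeline; the checkpoint name `your-org/bert-qa-model` is a placeholder, not this model's actual identifier:

```python
from transformers import pipeline

# Load the model into an extractive QA pipeline.
# "your-org/bert-qa-model" is a hypothetical checkpoint name.
qa = pipeline("question-answering", model="your-org/bert-qa-model")

result = qa(
    question="Where was the Eiffel Tower built?",
    context="The Eiffel Tower was built in Paris between 1887 and 1889.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Paris'}
```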
## Model Architecture
The model uses a BERT-base encoder (12 transformer layers, 768 hidden units, 12 attention heads) topped with a question-answering head: a linear layer over the final hidden states that produces a start logit and an end logit for every token, and the highest-scoring valid (start, end) pair defines the answer span.
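The sketch below shows how span prediction works at the logit level. It again assumes a hypothetical checkpoint name, and uses naive per-logit argmax decoding rather than the exhaustive span scoring that production decoders typically apply:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder checkpoint name; substitute the actual published model.
name = "your-org/bert-qa-model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What does the QA head predict?"
context = "The question answering head predicts start and end tokens for answer spans."

# Encode as a single [CLS] question [SEP] context [SEP] sequence.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One start logit and one end logit per token; the answer span runs from
# the highest-scoring start position to the highest-scoring end position.
start_idx = outputs.start_logits.argmax(dim=-1).item()
end_idx = outputs.end_logits.argmax(dim=-1).item()
answer_ids = inputs["input_ids"][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids))
```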
## Intended Use
Ideal for chatbots, search engines, or educational tools that need factual extraction from text. It handles English only, and the question and context together must fit within BERT's 512-token sequence limit, so longer passages must be chunked (see the sketch below).
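Because the 512-token limit covers the question and the context together, one way to handle longer documents is the tokenizer's built-in overflow support, which splits the context into overlapping windows so an answer is not cut in half at a chunk boundary. A sketch under the same placeholder-checkpoint assumption:

```python
from transformers import AutoTokenizer

# Hypothetical checkpoint name, as above.
tokenizer = AutoTokenizer.from_pretrained("your-org/bert-qa-model")

question = "When was the treaty signed?"
long_context = "word " * 2000  # stand-in for a passage longer than one window

# Truncate only the context ("only_second"), emitting overlapping chunks
# that each share a 128-token stride with their neighbor.
encoded = tokenizer(
    question,
    long_context,
    max_length=512,
    truncation="only_second",
    stride=128,
    return_overflowing_tokens=True,
)
print(f"{len(encoded['input_ids'])} overlapping chunks")
```

Each chunk is then scored independently, and the answer with the highest span score across chunks is returned.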
## Limitations
The model may fail on ambiguous questions, queries whose answer is not in the supplied context, or non-English text. Because it assumes the answer is present, it will return a span even when none is correct, and it can propagate biases from its training data.