---
base_model: microsoft/DialoGPT-medium
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational-ai
- finance
- fintech
- investment-banking
- banking
- lora
- financial-advisor
- portfolio-management
- risk-assessment
- financial-analysis
- trading
- markets
- wealth-management
- compliance
- regulatory
- client-advisory
- chatbot
- nlp
language:
- en
license: mit
datasets:
- gbharti/finance-alpaca
metrics:
- perplexity
- bleu
widget:
- text: "<|user|> What are the key risks in equity trading? <|bot|>"
example_title: "Risk Assessment Query"
- text: "<|user|> Explain portfolio diversification strategies <|bot|>"
example_title: "Investment Strategy"
- text: "<|user|> How do interest rates affect bond prices? <|bot|>"
example_title: "Market Analysis"
---
# DialoGPT-FinTech-Investment-Banking-Assistant
Fine-tuned DialoGPT-medium for financial conversations, investment advice, and banking operations, trained with LoRA on a finance-specific dataset.
## Overview
- **Base Model:** microsoft/DialoGPT-medium (355M parameters)
- **Fine-tuning Method:** LoRA with 4-bit quantization
- **Dataset:** Financial Alpaca (gbharti/finance-alpaca), 2K finance samples
- **Training:** 2 epochs (hyperparameters listed under Training Details below)
## Key Features
- Financial market analysis and investment guidance
- Banking operations and regulatory compliance
- Risk assessment and portfolio management
- Conversational interface for financial queries
- Optimized for UK fintech and investment banking
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("sweatSmile/DialoGPT-FinTech-Investment-Banking-Assistant")
tokenizer = AutoTokenizer.from_pretrained("sweatSmile/DialoGPT-FinTech-Investment-Banking-Assistant")

# Financial conversation example using the <|user|> / <|bot|> prompt format
prompt = "<|user|> What are the key risks in equity trading? <|bot|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
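For multi-turn conversations, the prompt can be extended with alternating `<|user|>` and `<|bot|>` markers, matching the format shown in the widget examples above. The helper below is a minimal sketch: the `chat` function and the sampling settings (`temperature`, `top_p`) are illustrative choices, not part of the released model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "sweatSmile/DialoGPT-FinTech-Investment-Banking-Assistant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def chat(history, user_message, max_new_tokens=200):
    """Append a user turn, generate the bot reply, and return the updated history."""
    history = history + f"<|user|> {user_message} <|bot|>"
    inputs = tokenizer(history, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,        # sampling settings are illustrative, not from the card
            temperature=0.7,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens as the bot reply
    reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    return history + f" {reply} ", reply

history = ""
history, reply = chat(history, "What are the key risks in equity trading?")
print(reply)
history, reply = chat(history, "How can I hedge those risks?")
print(reply)
```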
## Applications
- Investment banking client advisory
- Fintech customer support automation
- Financial education and training
- Risk management consultation
- Portfolio analysis assistance
## Training Details
- LoRA rank: 8, alpha: 16
- 4-bit quantization with fp16 precision
- Learning rate: 2e-4 with linear scheduling
- Batch size: 4; max sequence length: 512 tokens
- Specialized for financial-domain conversations (a configuration sketch follows)
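The hyperparameters above correspond to a QLoRA-style setup with the Hugging Face PEFT and bitsandbytes libraries. The sketch below shows one way they might be wired together, assuming a CUDA environment with `peft` and `bitsandbytes` installed; the target modules, dropout, and any setting not listed above are assumptions, not the exact training script.

```python
# Configuration sketch only: values not listed in the card are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit base weights
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/DialoGPT-medium", quantization_config=bnb_config
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=8,                        # LoRA rank from the card
    lora_alpha=16,              # LoRA alpha from the card
    target_modules=["c_attn"],  # GPT-2-style attention projection; an assumption
    lora_dropout=0.05,          # not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="dialogpt-fintech-lora",
    num_train_epochs=2,
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    fp16=True,
)
```

A `Trainer` (or TRL's `SFTTrainer`) would then consume `training_args` together with the finance-alpaca samples tokenized and truncated to 512 tokens.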
Built for deployment in financial services and fintech environments.