CyberSec AI Portfolio - Datasets, Models & Spaces (Collection)
80+ datasets, 35 Spaces & 4 models for cybersecurity AI, covering GDPR (RGPD), NIS2, ISO 27001, DORA, the EU AI Act, MITRE ATT&CK & more. By Ayi NEDJIMI.
A specialized cybersecurity assistant
This is the merged / standalone version of AYI-NEDJIMI/CyberSec-Assistant-3B. The LoRA adapter weights have been fully merged into the base model (Qwen/Qwen2.5-3B-Instruct), so no PEFT library is required at inference time.
| Property | Value |
|---|---|
| Base model | Qwen/Qwen2.5-3B-Instruct |
| Adapter version | AYI-NEDJIMI/CyberSec-Assistant-3B |
| Parameters | 3B |
| LoRA rank (r) | 64 |
| LoRA alpha | 128 |
| Precision | float16 |
| License | Apache 2.0 |
Quick start with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AYI-NEDJIMI/CyberSec-Assistant-3B-Merged"

# Load the tokenizer and the merged model (no PEFT required)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Build a chat prompt using the model's chat template
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the key principles of cybersecurity."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate, then decode only the newly generated tokens (skip the echoed prompt)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
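Note that `device_map="auto"` relies on the `accelerate` package, so install it alongside `transformers` (`pip install transformers accelerate`).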
Note: there is no need to install or import `peft`; this model is fully standalone.
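If GPU memory is tight, the merged checkpoint can also be loaded in 4-bit. This is not part of the original card; the snippet below is a minimal sketch assuming `bitsandbytes` is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "AYI-NEDJIMI/CyberSec-Assistant-3B-Merged"

# 4-bit NF4 quantization config (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```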
This model was fine-tuned using LoRA (Low-Rank Adaptation) with the configuration shown in the table above (rank r = 64, alpha = 128, float16 precision).
The adapter weights were then merged into the base model using model.merge_and_unload() from the PEFT library to produce this standalone checkpoint.
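For reference, a merge of this kind typically looks like the following. This is a reconstruction, not the author's exact script, and the output directory is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in float16
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    torch_dtype=torch.float16,
)

# Attach the LoRA adapter, then fold its weights into the base layers
model = PeftModel.from_pretrained(base, "AYI-NEDJIMI/CyberSec-Assistant-3B")
merged = model.merge_and_unload()

# Save the standalone checkpoint (output path is illustrative)
merged.save_pretrained("CyberSec-Assistant-3B-Merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct").save_pretrained(
    "CyberSec-Assistant-3B-Merged"
)
```

After the merge, the checkpoint loads with plain `AutoModelForCausalLM.from_pretrained`, exactly as in the quick-start example above.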