Naufal Suryanto (naufalso)
AI & ML interests: AI Security, Computer Vision, Adversarial Machine Learning, Industrial and Applied AI
Recent Activity
- Updated the collection Open Source LLM Datasets (5 days ago)
Paper to Read (LLM Training and Function Calling)
- Don't Stop Pretraining: Adapt Language Models to Domains and Tasks (arXiv:2004.10964)
- ToolACE: Winning the Points of LLM Function Calling (arXiv:2409.00920)
- Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks (arXiv:2407.00121)
State-of-the-art Open-Source LLM (General)
A collection of state-of-the-art open-source LLMs
Paper to Read (Agent Safety Benchmark)
List of papers on AI agent safety benchmarks
- AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents (arXiv:2410.09024)
- Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents (arXiv:2410.02644)
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal (arXiv:2402.04249)
LLM in Cybersecurity
List of papers to read on LLMs in cybersecurity
- Generative AI and Large Language Models for Cyber Security: All Insights You Need (arXiv:2405.12750)
- Ollabench: Evaluating LLMs' Reasoning for Human-centric Interdependent Cybersecurity (arXiv:2406.06863)
- Large Language Models for Cyber Security: A Systematic Literature Review (arXiv:2405.04760)
- Large Language Models in Cybersecurity: State-of-the-Art (arXiv:2402.00891)
Open Source LLM Datasets
A list of open-source LLM datasets