---
language: ["es", "en"]
license: apache-2.0
tags:
- bittensor
- subnet-20
- bitagent
- finney
- tao
- tool-calling
- bfcl
- reasoning
- agent
base_model: Salesforce/xLAM-7b-r
pipeline_tag: text-generation
model-index:
- name: antonio-bfcl-toolmodel
  results:
  - task:
      type: text-generation
      name: Generative reasoning and tool-calling
    metrics:
    - type: accuracy
      value: 0.0
---

# Antonio BFCL Toolmodel

This model is part of the Bittensor **BitAgent (Subnet-20)** ecosystem, designed for *tool-calling* tasks, structured logical reasoning, and contextual text generation. It is optimized for efficient communication between agents within the Finney protocol.

---

## Technical description

**antonio-bfcl-toolmodel** is based on the open-source `xLAM-7b-r` model, fine-tuned for:

- *Multilingual symbolic and factual reasoning*
- *Automatic tool-calling* (JSON format conforming to the Subnet-20 prompts)
- *Deterministic responses* with `temperature=0.1` and `top_p=0.9`
- *Full compatibility with the BitAgent Miner pipeline (v1.0.51)*
- *Supported languages*: Spanish and English
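
A tool call emitted as JSON can be validated with nothing but the standard library before it is dispatched. The schema below (a `name` field plus an `arguments` object) is only an illustrative assumption; the exact field names and format are defined by the Subnet-20 prompts.

```python
import json

# Hypothetical model output: an OpenAI-style tool call with a function
# name and a JSON object of arguments (the field names are assumptions).
raw_output = '{"name": "get_weather", "arguments": {"city": "Madrid", "unit": "celsius"}}'

call = json.loads(raw_output)

# Minimal structural validation before dispatching the call.
assert isinstance(call.get("name"), str)
assert isinstance(call.get("arguments"), dict)

print(call["name"])               # get_weather
print(call["arguments"]["city"])  # Madrid
```

If the parsed object fails these checks, a miner would typically return an error or retry generation rather than forward a malformed call.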

---

## Subnet-20 integration

Validators can invoke this model through the following protocols:

- `QueryTask`
- `QueryResult`
- `IsAlive`
- `GetHFModelName`
- `SetHFModelName`

The model responds via `bittensor.dendrite` and conforms to the **BitAgent v1.0.51** specification.
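
As a rough sketch of the miner-side surface, the dataclasses below mimic two of the protocol names listed above. This is purely illustrative: the real Subnet-20 classes subclass `bittensor.Synapse` in the BitAgent codebase, and their fields and behavior are assumptions here.

```python
from dataclasses import dataclass

# Illustrative stand-ins for two of the protocol synapses listed above.
# Names mirror the protocol list; the attributes are hypothetical.
@dataclass
class IsAlive:
    response: bool = False

@dataclass
class GetHFModelName:
    hf_model_name: str = ""

def handle(synapse):
    """Hypothetical miner-side dispatch over the incoming synapse type."""
    if isinstance(synapse, IsAlive):
        synapse.response = True
    elif isinstance(synapse, GetHFModelName):
        synapse.hf_model_name = "Tonit23/antonio-bfcl-toolmodel"
    return synapse

print(handle(IsAlive()).response)              # True
print(handle(GetHFModelName()).hf_model_name)  # Tonit23/antonio-bfcl-toolmodel
```

In the real pipeline, a validator's `dendrite` call carries one of these synapses to the miner's axon, which fills in the response fields and returns the same object.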

---

## Local inference example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Tonit23/antonio-bfcl-toolmodel"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Resuelve esta operación: 12 + 37 = "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.1,  # near-deterministic settings, as described above
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```