NewBie image Exp0.1
Efficient Image Generation Base Model Based on Next-DiT
🧱 Exp0.1 Base
NewBie image Exp0.1 is a 3.5B-parameter DiT model developed through research on the Lumina architecture. Building on those insights, we adopt Next-DiT as the foundation and design a new NewBie architecture tailored to text-to-image generation. The NewBie image Exp0.1 model is trained within this newly constructed system and represents the first experimental release of the NewBie text-to-image generation framework.
Text Encoder
We use Gemma3-4B-it as the primary text encoder, conditioning on its penultimate-layer token hidden states. We also extract pooled text features from Jina CLIP v2, project them, and fuse them into the time/AdaLN conditioning pathway. Together, Gemma3-4B-it and Jina CLIP v2 provide strong prompt understanding and improved instruction adherence.
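This card does not include the conditioning code; below is a minimal illustrative sketch of how the two text streams described above could be produced, assuming the public google/gemma-3-4b-it and jinaai/jina-clip-v2 checkpoints (the latter exposes encode_text via its remote code). The projection layer at the end is hypothetical and stands in for the model's learned projection.

import torch
from transformers import AutoModel, AutoTokenizer

# Gemma3-4B-it: penultimate-layer token hidden states as sequence conditioning.
tok = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")
gemma = AutoModel.from_pretrained("google/gemma-3-4b-it", torch_dtype=torch.bfloat16)

prompt = "1girl, cherry blossoms, detailed background"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = gemma(**inputs, output_hidden_states=True)
seq_cond = out.hidden_states[-2]  # (1, seq_len, hidden): penultimate layer

# Jina CLIP v2: pooled text feature, fused into the time/AdaLN pathway.
jina = AutoModel.from_pretrained("jinaai/jina-clip-v2", trust_remote_code=True)
pooled = torch.tensor(jina.encode_text([prompt]))  # (1, dim) pooled embedding

# Hypothetical projection into the conditioning width (learned in the real model).
proj = torch.nn.Linear(pooled.shape[-1], seq_cond.shape[-1])
adaln_cond = proj(pooled.float())  # added alongside the timestep embedding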
VAE
We use the FLUX.1-dev 16-channel VAE to encode images into latents, delivering richer, smoother color rendering and finer texture detail, helping safeguard the stunning visual quality of NewBie image Exp0.1.
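As a reference for how such latents are produced with diffusers, here is a minimal sketch, assuming access to the black-forest-labs/FLUX.1-dev repository; the shift and scale factors are read from the VAE config, as in FLUX:

import torch
from PIL import Image
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")
processor = VaeImageProcessor(vae_scale_factor=8)

image = Image.open("example.png").convert("RGB")
pixels = processor.preprocess(image).to("cuda", dtype=torch.bfloat16)

with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()
    # Normalize as FLUX does before feeding the DiT.
    latents = (latents - vae.config.shift_factor) * vae.config.scaling_factor

print(latents.shape)  # (1, 16, H/8, W/8)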
🖼️ Task type
NewBie image Exp0.1 is pretrained on a large corpus of high-quality anime data, enabling the model to generate remarkably detailed and visually striking anime-style images.
We reformatted the dataset text into an XML-structured format for our experiments. Empirically, this improved attention binding and attribute/element disentanglement, and also led to faster convergence.
It also supports natural-language and tag-based inputs.
In multi-character scenes, using an XML-structured prompt typically leads to more accurate generations; a small helper for composing such prompts follows the example below.
XML-structured prompt
<character_1>
<name>$character_1$</name>
<gender>1girl</gender>
<appearance>chibi, red_eyes, blue_hair, long_hair, hair_between_eyes, head_tilt, tareme, closed_mouth</appearance>
<clothing>school_uniform, serafuku, white_sailor_collar, white_shirt, short_sleeves, red_neckerchief, bow, blue_skirt, miniskirt, pleated_skirt, blue_hat, mini_hat, thighhighs, grey_thighhighs, black_shoes, mary_janes</clothing>
<expression>happy, smile</expression>
<action>standing, holding, holding_briefcase</action>
<position>center_left</position>
</character_1>
<character_2>
<name>$character_2$</name>
<gender>1girl</gender>
<appearance>chibi, red_eyes, pink_hair, long_hair, very_long_hair, multi-tied_hair, open_mouth</appearance>
<clothing>school_uniform, serafuku, white_sailor_collar, white_shirt, short_sleeves, red_neckerchief, bow, red_skirt, miniskirt, pleated_skirt, hair_bow, multiple_hair_bows, white_bow, ribbon_trim, ribbon-trimmed_bow, white_thighhighs, black_shoes, mary_janes, bow_legwear, bare_arms</clothing>
<expression>happy, smile</expression>
<action>standing, holding, holding_briefcase, waving</action>
<position>center_right</position>
</character_2>
<general_tags>
<count>2girls, multiple_girls</count>
<style>anime_style, digital_art</style>
<background>white_background, simple_background</background>
<atmosphere>cheerful</atmosphere>
<quality>high_resolution, detailed</quality>
<objects>briefcase</objects>
<other>alternate_costume</other>
</general_tags>
XML-structured prompt and attribute/element disentanglement showcase
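If you compose prompts programmatically, a small helper along the lines of the sketch below keeps the structure consistent. The field names follow the example above; build_character and build_general are our illustrative names, not part of the model or its tooling.

def build_character(index: int, **fields) -> str:
    # Emit one <character_N> block; pass fields in the order shown above.
    tag = f"character_{index}"
    body = "\n".join(f"<{key}>{value}</{key}>" for key, value in fields.items())
    return f"<{tag}>\n{body}\n</{tag}>"

def build_general(**fields) -> str:
    # Emit the shared <general_tags> block.
    body = "\n".join(f"<{key}>{value}</{key}>" for key, value in fields.items())
    return f"<general_tags>\n{body}\n</general_tags>"

prompt = "\n".join([
    build_character(
        1,
        name="$character_1$",
        gender="1girl",
        appearance="chibi, red_eyes, blue_hair, long_hair",
        expression="happy, smile",
        action="standing, holding_briefcase",
        position="center_left",
    ),
    build_general(
        count="1girl",
        style="anime_style, digital_art",
        background="white_background, simple_background",
    ),
])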
🧰 Model Zoo
🚀 Quickstart
- Diffusers
pip install diffusers transformers accelerate safetensors torch --upgrade
# Recommended: install FlashAttention and Triton according to your operating system.
import torch
from diffusers import NewbiePipeline


def main():
    model_id = "NewBie-AI/NewBie-image-Exp0.1"

    # Load the pipeline (use torch.float16 if your GPU does not support bfloat16).
    pipe = NewbiePipeline.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    prompt = "1girl"
    image = pipe(
        prompt,
        height=1024,
        width=1024,
        num_inference_steps=28,
    ).images[0]

    image.save("newbie_sample.png")
    print("Saved to newbie_sample.png")


if __name__ == "__main__":
    main()
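For reproducible outputs, you can pass a seeded generator to the pipeline call, following the standard diffusers pattern:

generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    prompt,
    height=1024,
    width=1024,
    num_inference_steps=28,
    generator=generator,
).images[0]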
- ComfyUI
💪 Training procedure
💬 Participate
Core Members
✨ Acknowledgments
- Thanks to the Alpha-VLLM org for open-sourcing the advanced Lumina family, which has been invaluable for our research.
- Thanks to Google for open-sourcing the powerful Gemma3 LLM family.
- Thanks to the Jina AI org for open-sourcing the Jina family, enabling further research.
- Thanks to Black Forest Labs for open-sourcing the FLUX VAE family; its powerful 16-channel VAE is one of the key components behind the improved image quality.
- Thanks to Neta.art for fine-tuning and open-sourcing the Lumina-Image-2.0 base model; Neta-Lumina gave us the opportunity to study how Next-DiT performs on anime-style generation.
- Thanks to DeepGHS/narugo1992/SumomoLee for providing high-quality anime datasets.
- Thanks to Nyanko for the early help and support.
🙌 Contribute
- Neko, ่กก้ฒ, XiaoLxl, xChenNing, Hapless, Lius
- WindySea, ็ง้บ้บ็ญ่ถ, ๅคๆฏ, Rnglg2, Ly, GHOSTLXH
- Sarara, Seina, KKTๆบๅจไบบ, NoirAlmondL, ๅคฉๆปก, ๆๆถ
- Wenakaๅต, ZhiHu, BounDless, DetaDT, ็ดซๅฝฑใฎใฝใใผใใซ
- ่ฑ็ซๆตๅ , R3DeK, ๅฃไบบA, ็็็, ไนพๅคๅSennke, ็ ้
- Heathcliff01, ๆ ้ณ, MonitaChan, WhyPing, TangRenLan
- HomemDesgraca, EPIC, ARKBIRD, Talan, 448, Hugs288
🧭 Community Guide
Getting Started Guide
LoRA Trainer
💬 Communication
📄 License
Model Weights: Newbie Non-Commercial Community License (Newbie-NC-1.0).
- Applies to: model weights/parameters/configs and derivatives (fine-tunes, LoRA, merges, quantized variants, etc.)
- Non-commercial use only; derivatives must be shared under the same license.
- See: LICENSE.md
Code: Apache License 2.0.
- Applies to: training/inference scripts and related source code in this project.
- See: Apache-2.0
⚠️ Disclaimer
This model may produce unexpected or harmful outputs. Users are solely responsible for any risks and potential consequences arising from its use.