---
language:
- en
license: mit
tags:
- computer-vision
- image-classification
- siamese-network
- one-shot-learning
- id-card-detection
- ocr
- document-verification
- tensorflow
- keras
- tflite
- android
- mobile-ml
datasets:
- custom
metrics:
- accuracy
- cosine-similarity
library_name: tensorflow
pipeline_tag: image-classification
---

# Android-Projekt: ID Card Classification & Embedding Models

[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
[![TensorFlow](https://img.shields.io/badge/TensorFlow-2.x-orange.svg)](https://www.tensorflow.org/)
[![Platform](https://img.shields.io/badge/platform-Android-green.svg)](https://developer.android.com/)

This repository contains machine learning models for ID card detection, classification, and embedding generation, optimized for Android deployment. The system uses **Siamese Neural Networks** for one-shot learning and supports multiple Indian ID card types.

## 📦 Models Overview

| Model File | Format | Size | Description | Use Case |
|------------|--------|------|-------------|----------|
| `id_classifier.tflite` | TFLite | 1.11 MB | Lightweight ID classifier | Mobile inference |
| `id_card_embedding_model.tflite` | TFLite | 1.26 MB | Compact embedding model | Mobile feature extraction |
| `id_card_classifier.keras` | Keras | 5.23 MB | Full Keras classifier | Training/fine-tuning |
| `id_classifier_saved_model.h5` | H5 | 8.85 MB | H5 format classifier | Legacy compatibility |
| `id_classifier_saved_model.keras` | Keras | 12.7 MB | Complete Keras model | Development/evaluation |
| `id_card_embedding_model.keras` | Keras | 191 MB | High-accuracy embedding model | Server-side processing |

## 🎯 Supported ID Card Types

- **PAN Card** (Permanent Account Number)
- **Aadhaar Card**
- **Driving License**
- **Passport**
- **Voter ID Card**

## 🚀 Quick Start

### For Android Development (TFLite)

```kotlin
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer

// Load TFLite model in Android
val model = Interpreter(loadModelFile("id_classifier.tflite"))

// Prepare input and output buffers
val inputBuffer = ByteBuffer.allocateDirect(inputSize)
val outputBuffer = ByteBuffer.allocateDirect(outputSize)

// Run inference
model.run(inputBuffer, outputBuffer)
```

### For Python/Training (Keras)

```python
from tensorflow.keras.models import load_model

# Load full Keras model
model = load_model("id_card_classifier.keras")

# Make predictions
predictions = model.predict(input_data)
```

### For TFLite Interpreter

```python
import tensorflow as tf

# Load TFLite model
interpreter = tf.lite.Interpreter(model_path="id_card_embedding_model.tflite")
interpreter.allocate_tensors()

# Get input and output details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
```

## 📥 Download & Installation

### Clone with Git LFS

```bash
git lfs install
git clone https://huggingface.co/Ajay007001/Android-Projekt
```

### Download Specific Model

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Ajay007001/Android-Projekt",
    filename="id_classifier.tflite"
)
```

## 🔧 Model Architecture

### Siamese Network for One-Shot Learning

```
Input (224x224x3)
        ↓
MobileNetV3Small (Pretrained on ImageNet)
        ↓
GlobalAveragePooling2D
        ↓
Dense(256, activation='relu')
        ↓
L2 Normalization
        ↓
Embedding Vector (256-dim)
```

**Training Strategy:**

- **Base Model**: MobileNetV3Small (transfer learning)
- **Embedding Dimension**: 256
- **Loss Function**: Binary Crossentropy (for Siamese pairs)
- **Optimizer**: Adam (lr=0.0001)
- **Data Augmentation**: Random flip, rotation, zoom, contrast
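
The architecture above maps directly to a few lines of Keras. The following is a minimal sketch of the embedding branch under the stated design (MobileNetV3Small backbone, 256-dim output); the builder function name is illustrative, and the `Lambda`-based L2 normalization stands in for the custom `L2Norm` layer used by the published models (see *Loading Models with Custom Layers* below):

```python
import tensorflow as tf

def build_embedding_branch(input_shape=(224, 224, 3), embedding_dim=256):
    """Embedding branch sketch: MobileNetV3Small -> GAP -> Dense(256) -> L2 norm."""
    base = tf.keras.applications.MobileNetV3Small(
        input_shape=input_shape, include_top=False, weights="imagenet"
    )
    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(embedding_dim, activation="relu")(x)
    # L2-normalize so cosine similarity reduces to a dot product
    embeddings = tf.keras.layers.Lambda(
        lambda t: tf.math.l2_normalize(t, axis=1), name="l2_norm"
    )(x)
    return tf.keras.Model(inputs, embeddings, name="id_card_embedding")

# In the Siamese setup, two copies of this branch share weights and a
# binary-crossentropy head is trained on same/different ID card pairs.
embedding_model = build_embedding_branch()
embedding_model.summary()
```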

### One-Shot Learning Process

1. Generate embedding for input image
2. Compare with reference embeddings using cosine similarity
3. Classify based on highest similarity score
4. Apply confidence threshold for verification
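
Steps 2-4 amount to a nearest-neighbour lookup over per-class reference embeddings. A minimal sketch, assuming you have already produced L2-normalized 256-dim embeddings with the TFLite embedding model as shown in the Quick Start (the reference dictionary and the 0.7 threshold below are illustrative):

```python
import numpy as np

def classify_by_similarity(query_embedding, reference_embeddings, threshold=0.7):
    """Compare a query embedding against per-class references (illustrative).

    Embeddings are already L2-normalized by the model, so cosine similarity
    reduces to a dot product.
    """
    scores = {
        label: float(np.dot(query_embedding, ref))
        for label, ref in reference_embeddings.items()
    }
    best_label = max(scores, key=scores.get)
    best_score = scores[best_label]
    if best_score < threshold:
        return "unknown", best_score  # fails the verification threshold
    return best_label, best_score

# Example usage with placeholder 256-dim reference embeddings
labels = ["pan", "aadhaar", "driving_license", "passport", "voter_id"]
references = {name: np.random.rand(256).astype(np.float32) for name in labels}
references = {k: v / np.linalg.norm(v) for k, v in references.items()}
query = references["pan"] + 0.01 * np.random.rand(256).astype(np.float32)
query /= np.linalg.norm(query)
print(classify_by_similarity(query, references))
```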

## 💡 Integration Tips

### Android Studio Setup

1. Place `.tflite` files in `app/src/main/assets/`
2. Add the TensorFlow Lite dependencies:

   ```gradle
   implementation 'org.tensorflow:tensorflow-lite:2.14.0'
   implementation 'org.tensorflow:tensorflow-lite-support:0.4.4'
   implementation 'org.tensorflow:tensorflow-lite-gpu:2.14.0'
   ```

3. Load and run inference in your Activity/Fragment

### Memory Considerations

⚠️ **Important**: The `id_card_embedding_model.keras` (191 MB) requires significant memory. For mobile deployment, use the `.tflite` versions (1-1.3 MB), which are optimized and quantized.

## 📊 Performance Metrics

| Model | Accuracy | Inference Time* | Mobile FPS |
|-------|----------|-----------------|------------|
| Embedding Model (TFLite) | 94.2% | ~25ms | ~40 FPS |
| Classifier (TFLite) | 96.8% | ~18ms | ~55 FPS |

*Tested on Snapdragon 888 with NNAPI acceleration

## 🛠️ Development

### Loading Models with Custom Layers

The Keras models use a custom `L2Norm` layer. Load them with:

```python
import tensorflow as tf

class L2Norm(tf.keras.layers.Layer):
    def call(self, inputs):
        return tf.math.l2_normalize(inputs, axis=1)

    def get_config(self):
        return super().get_config()

model = tf.keras.models.load_model(
    "id_card_embedding_model.keras",
    custom_objects={'L2Norm': L2Norm}
)
```

### Fine-tuning

```python
from tensorflow.keras.models import load_model

# Load base model
base_model = load_model("id_card_classifier.keras")

# Freeze early layers
for layer in base_model.layers[:-5]:
    layer.trainable = False

# Add custom layers for your specific use case and build `model` on top of
# the frozen base (e.g. model = tf.keras.Sequential([base_model, ...]))
# ... your architecture

# Compile and train
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(train_data, epochs=10)
```

### Convert Keras to TFLite

```python
import tensorflow as tf

# Load Keras model
model = tf.keras.models.load_model("id_card_classifier.keras")

# Convert to TFLite with optimization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# For INT8 quantization (smaller size, faster inference);
# `dataset` is a tf.data.Dataset of representative, preprocessed input images
def representative_dataset():
    for data in dataset.take(100):
        yield [data]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()

# Save
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```

## 📱 Mobile Deployment Best Practices

1. **Use TFLite models** for production apps (smaller, faster)
2. **Enable GPU acceleration** when available
3. **Implement model caching** to avoid repeated loading
4. **Use NNAPI delegate** for hardware acceleration
5. **Batch predictions** for multiple images
6. **Monitor memory usage** and release resources properly

Example GPU delegation:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate

val options = Interpreter.Options()
val gpuDelegate = GpuDelegate()
options.addDelegate(gpuDelegate)
val interpreter = Interpreter(modelFile, options)
```

## 🧪 Testing & Validation

### Test Inference Script

```python
import tensorflow as tf
import numpy as np

# Load TFLite model
interpreter = tf.lite.Interpreter(model_path="id_classifier.tflite")
interpreter.allocate_tensors()

# Prepare sample input
input_shape = interpreter.get_input_details()[0]['shape']
sample_input = np.random.rand(*input_shape).astype(np.float32)

# Run inference
interpreter.set_tensor(interpreter.get_input_details()[0]['index'], sample_input)
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])

print(f"Input shape: {input_shape}")
print(f"Output shape: {output.shape}")
print(f"Predictions: {output}")
```

## 📝 Model Card Metadata

- **Task**: Image Classification (One-Shot Learning)
- **Framework**: TensorFlow/Keras 2.x
- **Input**: RGB images (224x224)
- **Output**:
  - Embedding models: 256-dimensional feature vectors
  - Classifier models: 5-class probabilities (PAN, Aadhaar, DL, Passport, VoterID)
- **Training Data**: Custom dataset of Indian ID cards
- **Evaluation Metrics**: Accuracy, Cosine Similarity, Precision, Recall

## 📄 Citation

If you use these models in your research or application, please cite:

```bibtex
@misc{android-projekt-2025,
  author = {Ajay Vasan},
  title = {Android-Projekt: ID Card Classification & Embedding Models},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ajay007001/Android-Projekt}}
}
```

## 🔗 Related Resources

- **GitHub Repository**: [Android-Projekt](https://github.com/AjayVasan/Android-Projekt)
- **TensorFlow Lite Guide**: [Official Documentation](https://www.tensorflow.org/lite)
- **MobileNetV3 Paper**: [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
- **Siamese Networks**: [Learning a Similarity Metric Discriminatively](http://yann.lecun.com/exdb/publis/pdf/chopra-05.pdf)

## 📧 Contact & Support

For questions, issues, or contributions:

- Open an issue on [GitHub](https://github.com/AjayVasan/Android-Projekt/issues)
- Check the [documentation](https://github.com/AjayVasan/Android-Projekt#readme)

## ⚠️ Limitations & Ethical Considerations

- **Data Privacy**: Ensure compliance with local data protection laws (GDPR, etc.)
- **Bias**: Models trained on Indian ID cards may not generalize to other countries
- **Security**: Implement additional verification for high-security applications
- **Accuracy**: Not 100% accurate; human verification is recommended for critical use cases
- **Lighting**: Performance may degrade in poor lighting conditions
- **Orientation**: Works best with properly oriented ID card images

## 📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

---

**Model Version**: 1.0.0  
**Last Updated**: October 2025  
**Maintained by**: Ajay Vasan

---

### Model File Notice

The large embedding model (`id_card_embedding_model.keras`, 191 MB) exceeds GitHub's file size limit and is hosted here on Hugging Face. For production Android apps, we recommend the optimized TFLite versions, which are over 100x smaller and significantly faster.

---

**Made with ❤️ for the open-source community**