# Responsible-AI-Moderation Model
## Table of Contents
- [Introduction](#introduction)
- [Features](#features)
- [Installation](#installation)
- [Set Configuration Variables](#set-configuration-variables)
- [Models Required](#models-required)
- [Running the Application](#running-the-application)
- [Docker Image](#docker-image)
- [License](#license)
- [Contact](#contact)
## Introduction
The **Moderation Model** module acts as a central hub for machine learning models for prompt injection, toxicity, jailbreak, restricted topic, custom theme and refusal checks. It provides the endpoints to utilize the response generated by these models.
## Features
The **Moderation Model** module acts as a wrapper around the traditional AI models used for the various checks, such as prompt injection, jailbreak, and toxicity detection.
## Installation
To run the application, first install Python and the necessary packages:
1. Install Python (version 3.11.x) from the [official website](https://www.python.org/downloads/) and ensure it is added to your system PATH.
2. Clone the `responsible-ai-mm-flask` repository:
```sh
git clone <repository-url>
```
3. Navigate to the `responsible-ai-mm-flask` directory:
```sh
cd responsible-ai-mm-flask
```
4. Create a virtual environment:
```sh
python -m venv venv
```
5. Activate the virtual environment:
- On Windows:
```sh
.\venv\Scripts\activate
```
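- On Linux/macOS (standard `venv` activation, included here for completeness):
```sh
source venv/bin/activate
```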
6. Go to the `requirements` directory, where the `requirement.txt` file is present.
In `requirement.txt`, comment out the line:
```sh
lib/torch-2.2.0+cu118-cp39-cp39-linux_x86_64.whl
```
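After commenting it out, the line should look like this:
```sh
# lib/torch-2.2.0+cu118-cp39-cp39-linux_x86_64.whl
```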
**Note:** Download the torch wheel that matches your installed Python version. For example, if your Python version is 3.10, use torch-2.2.0+cu118-**cp310**-**cp310**-**linux**_x86_64.whl, where cp310 denotes Python 3.10 and the OS tag can be linux or win (prebuilt wheels are **_not applicable for Mac_**).
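To find the cpXY tag that matches your interpreter, you can print it directly (a quick one-liner; the cpXY format is the standard CPython wheel tag):
```sh
python -c "import sys; print('cp{}{}'.format(*sys.version_info[:2]))"
```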
**Note:** If working on Windows (the wheel path above is written for Linux), replace
```sh
lib/
```
with
```sh
../lib/
```
**Note:** If working on macOS, run the below command after installing from `requirement.txt`:
```sh
pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```
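On any OS, you can sanity-check that torch installed correctly (a quick check, not part of the original steps):
```sh
python -c "import torch; print(torch.__version__)"
```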
Download [en_core_web_lg](https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.5.0/en_core_web_lg-3.5.0-py3-none-any.whl) and place the `en_core_web_lg-3.5.0-py3-none-any.whl` file inside the `lib` folder, then install the requirements:
```sh
pip install -r requirement.txt
```
**Note:** When installing from `requirement.txt`, if you get an error related to "cuda-python", comment out `cuda-python` in the file and run `pip install` again.
Install the FastAPI library as well:
```sh
pip install fastapi
```
## Set Configuration Variables
After installing all the required packages, configure the variables necessary to run the APIs.
1. Navigate to the `src` directory (from the `requirements` directory):
```sh
cd ../src
```
2. Locate the `.env` file, which contains keys like the following:
```sh
workers=1
WORKERS="${workers}"
# DB_NAME="${dbname}"
# DB_USERNAME="${username}"
# DB_PWD="${password}"
# DB_IP="${ipaddress}"
# DB_PORT="${port}"
# MONGO_PATH="mongodb://${DB_USERNAME}:${DB_PWD}@${DB_IP}:${DB_PORT}/"
# MONGO_PATH= "mongodb://localhost:27017/"
```
3. Uncomment the keys you need and replace the placeholders with your actual values.
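For example, a locally filled-in `.env` might look like this (all values below are illustrative, not defaults):
```sh
workers=1
WORKERS="${workers}"
DB_NAME="rai_moderation"
DB_USERNAME="admin"
DB_PWD="changeme"
DB_IP="127.0.0.1"
DB_PORT="27017"
MONGO_PATH="mongodb://${DB_USERNAME}:${DB_PWD}@${DB_IP}:${DB_PORT}/"
```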
## Models Required
The following models are required to run the application. Download all the model files from the links provided and place them in the folders named below.
1. [Prompt Injection](https://huggingface.co/deepset/deberta-v3-base-injection/tree/main)
Files required: model.safetensors, config.json, tokenizer_config.json, tokenizer.json, special_tokens_map.json.
Name the folder 'dbertaInjection'.
2. [Restricted Topic](https://huggingface.co/MoritzLaurer/deberta-v3-base-zeroshot-v2.0/tree/main)
Files required: model.safetensors, added_tokens.json, config.json, special_tokens_map.json, spm.model, tokenizer.json, tokenizer_config.json.
Name the folder 'restricted-dberta-base-zeroshot-v2'.
3. [Sentence Transformer Model](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1/tree/main)
Files required: the 1_Pooling folder, pytorch_model.bin, vocab.txt, tokenizer.json, tokenizer_config.json, special_tokens_map.json, sentence_bert_config.json, modules.json, config.json, config_sentence_transformers.json.
Name the folder 'multi-qa-mpnet-base-dot-v1'.
4. [Detoxify](https://huggingface.co/FacebookAI/roberta-base/tree/main)
Files required: vocab.json, tokenizer.json, merges.txt, config.json.
Also download the model checkpoint file from [toxic_model_ckpt_file](https://github.com/unitaryai/detoxify/releases/download/v0.3-alpha/toxic_debiased-c7548aa0.ckpt) and keep it under this folder (see the sketch after this list).
Name the folder 'detoxify'.
5. [Gibberish](https://huggingface.co/madhurjindal/autonlp-Gibberish-Detector-492513457)
Files required: vocab.json, tokenizer.json, config.json, pytorch_model.bin, tokenizer_config.json, special_tokens_map.json.
Name the folder 'gibberish'.
6. [Bancode](https://huggingface.co/vishnun/codenlbert-tiny)
Files required: vocab.txt, tokenizer.json, config.json, pytorch_model.bin, tokenizer_config.json, special_tokens_map.json.
Name the folder 'bancode'.
7. [Restricted Topic](https://huggingface.co/cross-encoder/nli-MiniLM2-L6-H768)
Files required: vocab.json, tokenizer.json, config.json, merges.txt, pytorch_model.bin, tokenizer_config.json, special_tokens_map.json.
Name the folder 'nli-MiniLM2-L6-H768'.
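As referenced in item 4, the Detoxify checkpoint can be fetched from the release URL above; a minimal sketch, assuming `wget` is available and you run it from the repository root so the file lands in `models/detoxify`:
```sh
wget -P models/detoxify https://github.com/unitaryai/detoxify/releases/download/v0.3-alpha/toxic_debiased-c7548aa0.ckpt
```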
Place the above folders in a folder named 'models' at the root of the repository: `responsible-ai-mm-flask/models`.
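The resulting layout should look like this (folder names as specified above; file lists abbreviated):
```sh
responsible-ai-mm-flask/
└── models/
    ├── dbertaInjection/
    ├── restricted-dberta-base-zeroshot-v2/
    ├── multi-qa-mpnet-base-dot-v1/
    ├── detoxify/
    │   └── toxic_debiased-c7548aa0.ckpt
    ├── gibberish/
    ├── bancode/
    └── nli-MiniLM2-L6-H768/
```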
## Running the Application
Once all the aforementioned steps are complete, you can start the service.
1. Navigate to the `src` directory:
2. Run the `main_MM.py` file:
```sh
python main_MM.py
```
3. Open the following URL in your browser, replacing `8000` with the port number configured in the `.env` file:
`http://localhost:8000/rai/v1/raimoderationmodels/docs`
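To verify from the command line that the service is up, you can request the docs page (assuming the default port 8000):
```sh
curl -I http://localhost:8000/rai/v1/raimoderationmodels/docs
```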
**Note:**
1. To address the issue where the Passport Number is not recognized in Privacy, change the `piiEntitiesToBeRedacted` field in `privacy()` in the `service.py` file (line no: 98) from `None` to an empty list `[]`. This adjustment ensures that the Passport Number is correctly identified.
2. Do not use this Moderation Model repository as a standalone repository. It serves as the base dependency for the Moderation Layer repository, which provides the 'Guardrail' functionality, so access this repository's APIs through the Moderation Layer.
## Docker Image
The Docker image for the ModerationModel module has been published on Docker Hub. You can access it here: [ModerationModel image](https://hub.docker.com/repository/docker/infosysresponsibleaitoolkit/responsible-ai-moderationmodel)
## License
The source code for the project is licensed under the MIT license, which you can find in the [LICENSE.txt](LICENSE.txt) file.
## Contact
If you have more questions or need further insights, please feel free to reach out to us at Infosysraitoolkit@infosys.com.