Securing and controlling adversarial AI
Tell stories with AI
In this age of rapidly growing AI, roughly 75% of kids now own a phone by the age of 12, yet a third of the AI community still thinks AI can go sideways. If adults don't trust AI, how can they trust AI in the hands of their children?
Goal: Write stories for children that contain no harmful text, and simplify the text generation process.
Updated goal: a security-focused AI API that can be pulled into any system, where the AI adapts to that system's own domain (starting with a children's-stories model), adapting to other AI models and providing the security best suited to each. Start with: training the model on adversarial techniques.
Things to be achieved in this project: evaluation of biases, risk management, elimination of toxicity, non-harmful text generation, controlled hallucinations, easy content generation.
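A minimal sketch of how the toxicity-elimination item could be evaluated, assuming the Hugging Face transformers library; the classifier name unitary/toxic-bert and the 0.5 threshold are assumptions, not part of these notes:

```python
# Minimal sketch: score generated story text for toxicity before it is shown
# to a child. The model name and threshold are illustrative choices only.
from transformers import pipeline

toxicity_scorer = pipeline("text-classification", model="unitary/toxic-bert")

def is_safe(story_text: str, threshold: float = 0.5) -> bool:
    """Treat the text as safe when the top label is not 'toxic'
    or its score stays below the threshold."""
    result = toxicity_scorer(story_text, truncation=True)[0]
    return result["label"].lower() != "toxic" or result["score"] < threshold

print(is_safe("Once upon a time, a friendly dragon helped the village bake bread."))
```

The same pattern could be pointed at bias-probing prompts for the bias-evaluation item.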
Base: use Facebook Llama 2 and RoBERTa modelling, NLP integration, a way to work through the large dataset, masking techniques, more efficiency, more epochs.
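A small sketch of the masking technique mentioned in the base line, using a RoBERTa fill-mask pipeline (the example sentence is illustrative):

```python
# Sketch of the masking technique: RoBERTa predicts tokens hidden behind
# <mask>, the same masked-language-modelling objective we would fine-tune
# on the children's-stories dataset.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# RoBERTa's mask token is "<mask>".
predictions = fill_mask("The little fox read a <mask> before going to sleep.")
for p in predictions[:3]:
    print(f"{p['token_str'].strip()!r}  score={p['score']:.3f}")
```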
Trial 1: use two activation functions in one neural network; implement security on the AI.
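A rough PyTorch sketch of what Trial 1 could look like: one small network that chains two different activation functions (ReLU and Tanh chosen here as an assumption; layer sizes are placeholders):

```python
# Sketch for Trial 1: one small network that uses two different activation
# functions (ReLU then Tanh). Layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class DualActivationHead(nn.Module):
    def __init__(self, in_dim: int = 768, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.act1 = nn.ReLU()   # first activation
        self.fc2 = nn.Linear(hidden, hidden)
        self.act2 = nn.Tanh()   # second, different activation
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        return self.out(x)

head = DualActivationHead()
print(head(torch.randn(4, 768)).shape)  # torch.Size([4, 2])
```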
Trial 2: while using the optimizer, add ARCA, a coordinate-ascent algorithm that iteratively updates one token at a time.
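A toy sketch of the coordinate-ascent idea behind ARCA, not the full algorithm from the referenced work: one prompt token at a time is swapped for whichever candidate raises the log-probability of a target continuation. GPT-2, the prompt, the target, and the candidate count are all assumptions made for illustration:

```python
# Very simplified coordinate-ascent sketch: iteratively re-optimise one
# prompt token at a time so that a scoring function (here: the target
# continuation's log-probability) increases. Toy illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def target_logprob(prompt_ids: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Log-probability the model assigns to target_ids right after prompt_ids."""
    ids = torch.cat([prompt_ids, target_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(ids).logits[0]
    logprobs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i, t in enumerate(target_ids):
        total += logprobs[prompt_ids.shape[0] + i - 1, t].item()
    return total

prompt_ids = tok("Tell a story about", return_tensors="pt").input_ids[0]
target_ids = tok(" dragons", return_tensors="pt").input_ids[0]

# Coordinate ascent: for each prompt position, try a handful of candidate
# tokens and keep whichever one maximises the target log-probability.
candidates = torch.randint(0, tok.vocab_size, (32,))
for pos in range(prompt_ids.shape[0]):
    best_ids, best_score = prompt_ids.clone(), target_logprob(prompt_ids, target_ids)
    for cand in candidates:
        trial = prompt_ids.clone()
        trial[pos] = cand
        score = target_logprob(trial, target_ids)
        if score > best_score:
            best_ids, best_score = trial, score
    prompt_ids = best_ids

print(tok.decode(prompt_ids), "->", best_score)
```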
Output: restriction of words, controlled outcomes, improving the model's capability to learn in a non-harmful way.
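A hedged sketch of the word-restriction output control, using the bad_words_ids argument of transformers' generate(); the model, prompt, and banned-word list are illustrative only:

```python
# Sketch of the "restriction of words" control: block specific words at
# generation time via the bad_words_ids argument of generate().
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

banned_words = ["kill", "blood", "gun"]
bad_words_ids = [tok(f" {w}", add_special_tokens=False).input_ids for w in banned_words]

inputs = tok("Once upon a time,", return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    bad_words_ids=bad_words_ids,   # these token sequences can never be produced
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(output[0], skip_special_tokens=True))
```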
Reference: Adversarial attacks on AI - Carnegie Mellon University
Adversarial machine learning (AML) is the process of extracting information about the behaviour and characteristics of a machine learning model, and of learning to manipulate the input to an ML model in order to reach a desired outcome. (This project also restricts the output.)
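A minimal sketch of the "extract behaviour by manipulating the input" half of that definition: probe a black-box classifier with slightly edited inputs and observe how its predictions shift (the sentiment model and probe sentences are assumptions):

```python
# Sketch of black-box probing: query a classifier with small edits to one
# sentence and watch how its prediction shifts. Model and text are illustrative.
from transformers import pipeline

clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")

probes = [
    "The brave knight saved the village.",
    "The brave knight saved the village!!!",
    "The brave kniht saved the village.",   # small typo perturbation
]
for text in probes:
    result = clf(text)[0]
    print(f"{text!r:55} -> {result['label']} ({result['score']:.3f})")
```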