Global Fintech Series had a one on one interaction with Dan Adamson who is among the founder member of this unique model- AutoAlign. Key points of discussion was around why solutions like AutoAlign are needed for generative AI and how it could be beneficial for companies in various industries. Today, Armilla unveils its new product, AutoAlign, a web-based platform with a Low-code solution for reducing misogyny, gender bias, hallucinations, and detrimental responses, as well as eliminating bias.
Armilla AI is a Quality Assurance platform for models allowing large enterprises to govern, deploy and scale their AI/ML systems. This model has been floated by this organization.AutoAlign, unlike other responsible AI solutions that merely erect additional safety barriers, uses AI to generate data, perform tests, identify the weak points in an AI model, and then repair them. AutoAlign can fine-tune a model in a matter of hours, making safety testing seamless and accessible for businesses that lack the time and resources to do so manually.
Latest Fintech News:Â Jack Henry Launches Real-Time Payments Fraud Feature
The tool can be deployed on the private cloud servers of an organization, where it can remain wholly internal, or it can be made accessible to customers; in either case, it can protect personally identifiable information (PII) and other sensitive and encrypted data. The company, supported by Y-Combinator, Naval Ravikant, and Dr. Yoshua Bengio, is launching a new platform called AutoAlign that will enable businesses to manage the behavior of their generative AI model.
AutoAlign’s fine-tuning controls allow the user to create new “alignment goals,” such as responses that should not assume gender based on profession, and optimize the model’s training to meet these goals for businesses and organizations that wish to avoid using a model that assumes the managing director identifies as male. Adamson demonstrated how the same AutoAlign-tuned model produced the gender-neutral pronoun “they” when presented with the same identical language. There are, however, a number of cautionary tales that have emerged as businesses and their employees grapple with how to safely experiment with GenAI.
Consider the Samsung employees who shared confidential information or the attorney who consulted ChatGPT only to receive fabricated, hallucinated court cases that he used in his own arguments. Or, the recent example of a “wellness chatbot” that was removed offline after providing “harmful” responses to at least one user regarding eating disorders and dieting. Fortunately, software vendors are racing to assist in the resolution of these issues.
Latest Fintech News: BearingPoint Empowers Burkina Faso’s Ministry of Finance with Blockchain
[To share your insights with us, please write to sghosh@martechseries.com]